Interfaces as a Scarce Resource
post by johnswentworth · 2020-03-05T18:20:26.733Z
Outline:
- The first three sections (Don Norman’s Fridge, Interface Design, and When And Why Is It Hard?) cover what we mean by “interface”, what it looks like for interfaces to be scarce, and the kinds of areas where they tend to be scarce.
- The next four sections apply these ideas to various topics:
- Why AR is much more difficult than VR
- AI alignment from an interface-design perspective
- Good interfaces as a key bottleneck to creation of markets
- Cross-department interfaces in organizations
Don Norman’s Fridge
Don Norman (known for popularizing the term “affordance” in The Design of Everyday Things) offers a story about the temperature controls on his old fridge:
I used to own an ordinary, two-compartment refrigerator - nothing very fancy about it. The problem was that I couldn’t set the temperature properly. There were only two things to do: adjust the temperature of the freezer compartment and adjust the temperature of the fresh food compartment. And there were two controls, one labeled “freezer”, the other “refrigerator”. What’s the problem?
Oh, perhaps I’d better warn you. The two controls are not independent. The freezer control also affects the fresh food temperature, and the fresh food control also affects the freezer.
The natural human model of the refrigerator is: there are two compartments, and we want to control their temperatures independently. Yet the fridge, apparently, does not work like that. Why not? Norman:
In fact, there is only one thermostat and only one cooling mechanism. One control adjusts the thermostat setting, the other the relative proportion of cold air sent to each of the two compartments of the refrigerator.
It’s not hard to imagine why this would be a good design for a cheap fridge: it requires only one cooling mechanism and only one thermostat. Resources are saved by not duplicating components - at the cost of confused customers.
The root problem in this scenario is a mismatch between the structure of the machine (one thermostat, adjustable allocation of cooling power) and the structure of what-humans-want (independent temperature control of two compartments). In order to align the behavior of the fridge with the behavior humans want, somebody, at some point, needs to do the work of translating between the two structures. In Norman’s fridge example, the translation is botched, and confusion results.
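To make the mismatch concrete, here's a minimal sketch in Python - all class names, knob values, and units are made up for illustration:

```python
class FridgeHumansExpect:
    """The structure of what-humans-want: two independent temperature controls."""

    def set_freezer_temp(self, degrees_f):
        ...  # should affect only the freezer

    def set_fresh_food_temp(self, degrees_f):
        ...  # should affect only the fresh food compartment


class FridgeNormanOwned:
    """The structure of the machine: one thermostat, one adjustable air split."""

    def __init__(self):
        self.thermostat = 5       # overall cooling power (arbitrary units)
        self.freezer_share = 0.5  # fraction of cold air routed to the freezer

    def set_thermostat(self, setting):
        # Moves BOTH compartment temperatures at once.
        self.thermostat = setting

    def set_air_split(self, freezer_share):
        # Cooling one compartment more necessarily cools the other less.
        self.freezer_share = freezer_share
```

Norman's fridge put the labels of the first class onto the knobs of the second - that's the botched translation.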
We’ll call whatever method/tool is used for translating between structures an interface. Creating good methods/tools for translating between structures, then, is interface design.
Interface Design
In programming, the analogous problem is API design: taking whatever data structures are used by a software tool internally, and figuring out how to present them to external programmers in a useful, intelligible way. If there’s a mismatch between the internal structure of the system and the structure of what-users-want, then it’s the API designer’s job to translate. A “good” API is one which handles the translation well.
User interface design is a more general version of the same problem: take whatever structures are used by a tool internally, and figure out how to present them to external users in a useful, intelligible way. Conceptually, the only difference from API design is that we no longer assume our users are programmers interacting with the tool via code. We design the interface to fit however people use it - that could mean handles on doors, or buttons and icons in a mobile app, or the temperature knobs on a fridge.
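Continuing the hypothetical fridge sketch from above, a designed interface is a translation layer: expose the controls users actually want, and compute the internal settings behind the scenes. The inverse model below is a toy, not real refrigeration engineering:

```python
ROOM_TEMP_F = 70  # assumed ambient temperature for the toy model

class TwoKnobFridge:
    """Wraps the one-thermostat fridge behind the two independent
    temperature controls that users expect."""

    def __init__(self, fridge):
        self.fridge = fridge
        self.freezer_target = 0
        self.fresh_food_target = 38

    def set_freezer_temp(self, degrees_f):
        self.freezer_target = degrees_f
        self._retranslate()

    def set_fresh_food_temp(self, degrees_f):
        self.fresh_food_target = degrees_f
        self._retranslate()

    def _retranslate(self):
        # Toy inverse model: total cooling demand sets the thermostat,
        # relative demand sets the air split.
        freezer_demand = ROOM_TEMP_F - self.freezer_target
        fresh_demand = ROOM_TEMP_F - self.fresh_food_target
        self.fridge.set_thermostat(freezer_demand + fresh_demand)
        self.fridge.set_air_split(freezer_demand / (freezer_demand + fresh_demand))

fridge = TwoKnobFridge(FridgeNormanOwned())
fridge.set_freezer_temp(-5)     # the user thinks in compartments;
fridge.set_fresh_food_temp(40)  # the wrapper recomputes the internals each time
```

The translation work doesn't disappear - it just gets done once, by the interface designer, instead of every time by every confused user.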
Economically, interface design is a necessary input to make all sorts of things economically useful. How scarce is that input? How much are people willing to spend for good interface design?
My impression is: a lot. There’s an entire category of tech companies whose business model is:
- Find a software tool or database which is very useful but has a bad interface
- Build a better interface to the same tool/database
- …
- Profit
This is an especially common pattern among small but profitable software companies; it’s the sort of thing where a small team can build a tool and then lock in a small number of very loyal high-paying users. It’s a good value prop - you go to people or businesses who need to use X, but find it a huge pain, and say “here, this will make it much easier to use X”. Some examples:
- Companies which interface to government systems to provide tax services, travel visas, patenting, or business licensing
- Companies which set up websites, Salesforce, corporate structure, HR services, or shipping logistics for small business owners with little relevant expertise
- Companies which provide graphical interfaces for data, e.g. website traffic, sales funnels, government contracts, or market fundamentals
Even bigger examples can be found outside of tech, where humans themselves serve as the interface. What does this look like? It's the entire industry of tax accountants, or contract law, or lobbying - any industry where you could, in principle, just do it yourself, but the system is complicated and confusing enough that it's useful to have an expert around to translate the things-you-want into the structure of the system.
In some sense, the entire field of software engineering is an example. A software engineer’s primary job is to translate the things-humans-want into a language understandable by computers. People use software engineers because talking to the engineer (difficult though that may be) is an easier interface than an empty file in Jupyter.
These are not cheap industries. Lawyers, accountants, lobbyists, programmers… these are experts in complicated systems, and they get paid accordingly. The world spends large amounts of resources using people as interfaces - indicating that these kinds of interfaces are a very scarce resource.
When And Why Is It Hard?
Don Norman’s work is full of interesting examples and general techniques for accurately communicating the internal structure of a tool to users - the classic example is “handle means pull, flat plate means push” on a door. At this point, I think (at least some) people have a pretty good understanding of these techniques, and they’re spreading over time. But accurate communication of a system’s internal structure is only useful if the system’s internal structure is itself pretty simple - like a door or a fridge. If I want to, say, write a contract, then I need to interface to the system of contract law; accurately communicating that structure would take a whole book, even just to summarize key pieces.
There are lots of systems which are simple enough that accurate communication is the bulk of the problem of interface design - this includes most everyday objects (like fridges), as well as most websites or mobile apps.
But the places where we see expensive industries providing interfaces - like law or software - are usually the cases where the underlying system is more complex. These are cases where the structure of what-humans-want is very different from the system’s structure, and translating between the two requires study and practice. Accurate communication of the system’s internal structure is not enough to make the problem easy.
In other words: interfaces to complex systems are especially scarce. This economic constraint is very taut, across a number of different areas. We see entire industries - large industries - whose main purpose is to provide non-expert humans with an interface to a particular complex system.
Given that interfaces to complex systems are a scarce resource in general, what other predictions would we make? What else would we expect to be hard/expensive, as a result of interfaces to complex systems being hard/expensive?
AR vs VR
By the standards of software engineering, pretty much anything in the real world is complex. Interfacing to the real world means we don’t get to choose the ontology - we can make up a bunch of object types and data structures, but the real world will not consider itself obligated to follow them. The internal structure of computers or programming languages is rarely a perfect match to the structure of the real world.
Interfacing the real world to computers, then, is an area we’d expect to be difficult and expensive.
Augmented reality (AR) is one area where I expect this to be keenly felt, especially compared to VR. I expect AR applications to lag dramatically behind full virtual reality, in terms of both adoption and product quality. I expect AR will mostly be used in stable, controlled environments - e.g. factory floors or escape-room-style on-location games.
Why is interfacing software with the real world hard? Some standard answers:
- The real world is complicated. This is a cop-out answer which doesn’t actually explain anything.
- The real world has lots of edge cases. This is also a cop-out, but more subtly; the real world will only seem to be full of edge cases if our program’s ontologies don’t line up well with reality. The real question: why is it hard to make our ontologies line up well with reality?
Some more interesting answers:
- The real world isn’t implemented in Python. To the extent that the real world has a language, that language is math. As software needs to interface more with the real world, it’s going to require more math - as we see in data science, for instance - and not all of that math will be easy to black-box and hide behind an API.
- The real world is only partially observable - even with ubiquitous sensors, we can’t query anything anytime the way we can with e.g. a database. Explicitly modelling things we can’t directly observe will become more important over time, which means more reliance on probability and ML tools (though I don’t think black-box methods or “programming by example” will expand beyond niche applications).
- We need enough compute to actually run all that math. In practice, I think this constraint is less taut than it first seems - we should generally be able to perform at least as well as a human without brute-forcing exponentially hard problems. That said, we do still need efficient algorithms.
- The real-world things we are interested in are abstract, high-level objects. At this point, we don’t even have the mathematical tools to work with these kinds of fuzzy abstractions.
- We don’t directly control the real world. Virtual worlds can be built to satisfy various assumptions by design; the real world can’t.
- Combining the previous points: we don’t have good ways to represent our models of the real world, or to describe what we want in the real world.
- Software engineers are mostly pretty bad at describing what they want and building ontologies which line up with the real world. These are hard skills to develop, and few programmers explicitly realize that they need to develop them.
Alignment
Continuing the discussion from the previous section, let’s take the same problems in a different direction. We said that translating what-humans-want-in-the-real-world into a language usable by computers is hard/expensive. That’s basically the AI alignment problem. Does the interfaces-as-scarce-resource view lend any interesting insight there?
First, this view immediately suggests some simple analogues for the AI alignment problem. The “Norman’s fridge alignment problem” is one - it’s surprisingly difficult to get a fridge to do what we want, when the internal structure of the fridge doesn’t match the structure of what we want. Now consider the internal structure of, say, a neural network - how well does that match the structure of what we want? It’s not hard to imagine that a neural network would run into a billion-times-more-difficult version of the fridge alignment problem.
Another analogue is the “Ethereum alignment problem”: we can code up a smart contract to give monetary rewards for anything our code can recognize. Yet it’s still difficult to specify a contract for exactly the things we actually want. This is essentially the AI alignment problem, except we use a market in place of an ML-based predictor/optimizer. One interesting corollary of the analogy: there are already economic incentives to find ways of aligning a generic predictor/optimizer. That’s exactly the problem faced by smart contract writers, and by other kinds of contract writers/issuers in the economy. How strong are those incentives? What do the rewards for success look like - are smart contracts only a small part of the economy because the rewards are meager, or because the problems are hard? More discussion of the topic in the next section.
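In the meantime, here's a toy sketch of the gap (plain Python standing in for contract code - not a real smart-contract API). We want a general sorting function; all the contract can recognize is a finite list of checks, and that gap is exactly where the contract gets gamed:

```python
def sorting_bounty(submitted_fn):
    """Pays out for anything our code can recognize as 'sorting' -
    here, passing two hand-picked test cases."""
    test_cases = [([3, 1, 2], [1, 2, 3]), ([5, 4], [4, 5])]
    return all(submitted_fn(inp) == out for inp, out in test_cases)

# What we actually want:
def honest(xs):
    return sorted(xs)

# What the reward criterion equally pays for:
def gamed(xs):
    return {(3, 1, 2): [1, 2, 3], (5, 4): [4, 5]}[tuple(xs)]

assert sorting_bounty(honest) and sorting_bounty(gamed)  # both collect the bounty
```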
Moving away from analogues of alignment, what about alignment paths/strategies?
I think there’s a plausible (though not very probable) path to general artificial intelligence in which:
- We figure out various core theoretical problems, e.g. abstraction, pointers to values, embedded decision theory, …
- The key theoretical insights are incorporated into new programming languages and frameworks
- Programmers can more easily translate what-they-want-in-the-real-world into code, and make/use models of the world which better line up with the structure of reality
- … and this creates a smooth-ish path of steadily-more-powerful declarative programming tools which eventually leads to full AGI
To be clear, I don't see a complete roadmap yet for this path; the list of theoretical problems is not complete, and a lot of progress would be needed in non-agenty mathematical modelling as well. But even if this path isn’t smooth or doesn’t run all the way to AGI, I definitely see a lot of economic pressure for this sort of thing. We are economically bottlenecked on our ability to describe what we want to computers, and anything which relaxes that bottleneck will be very valuable.
Markets and Contractability
The previous section mentioned the Ethereum alignment problem: we can code up a smart contract to give monetary rewards for anything our code can recognize, yet it’s still difficult to specify a contract for exactly the things we actually want. More generally, it’s hard to create contracts which specify what we want well enough that they can’t be gamed.
(Definitional note: I’m using “contract” here in the broad sense, including pretty much any arrangement for economic transactions - e.g. by eating in a restaurant you implicitly agree to pay the bill later, and a box in a store implicitly promises to contain what its label says. At least in the US, these kinds of implicit contracts are legally binding, and we can sue if they’re broken.)
A full discussion of contract specification goes way beyond interfaces - it’s basically the whole field of contract theory and mechanism design, and encompasses things like adverse selection, signalling, moral hazard, incomplete contracts, and so forth. All of these are techniques and barriers to writing a contract when we can’t specify exactly what we want. But why can’t we specify exactly what we want in the first place? And what happens when we can?
Here’s a good example where we can specify exactly what we want: buying gasoline. The product is very standardized, the measures (liters or gallons) are very standardized, so it’s very easy to say “I’m buying X liters of type Y gas at time and place Z” - existing standards will fill in the remaining ambiguity. That’s a case where the structure of the real world is not too far off from the structure of what-we-want - there’s a nice clean interface. Not coincidentally, this product has a very liquid market: many buyers/sellers competing over price of a standardized good. Standard efficient-market economics mostly works.
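A purchase like this fits comfortably in a small data structure - a sketch with hypothetical field names, where every field points at an existing standard rather than negotiating anything new:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GasolineOrder:
    # Each field is backed by an existing standard, so the order is a
    # complete specification of what-the-buyer-wants.
    liters: float
    octane: int   # standardized grade, e.g. 87 / 89 / 93
    location: str
    date: str     # ISO 8601

order = GasolineOrder(liters=40.0, octane=93, location="station #12", date="2020-03-05")
```

The analogous `IntellectualWorkOrder` has no such standardized fields to fill in - which is the point of the next example.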
On the other end of the spectrum, here’s an example where it’s very hard to specify exactly what we want: employing people for intellectual work. It’s hard to outsource expertise - often, a non-expert doesn’t even know how to tell a job well done from sloppy work. This is a natural consequence of using an expert as an interface to a complicated system. As a result, it’s hard to standardize products, and there’s not a very liquid market. Rather than efficient markets, we have to fall back on the tools of contract theory and mechanism design - we need ways of verifying that the job is done well without being able to just specify exactly what we want.
In the worst case, the tools of contract theory are insufficient, and we may not be able to form a contract at all. The lemon problem is an example: a seller may have a good used car, and a buyer may want to buy a good used car, but there’s no (cheap) way for the seller to prove to the buyer that the car isn’t a lemon - so there’s no transaction. If we could fully specify everything the buyer wants from the car, and the seller could visibly verify that every box is checked, cheaply and efficiently, then this wouldn’t be an issue.
The upshot of all this is that good interfaces - tools for translating the structure of the real world into the structure of what-we-want, and vice versa - enable efficient markets. They enable buying and selling with minimal overhead, and they avoid the expense and complexity of contract-theoretic tools.
Create a good interface for specifying what-people-want within some domain, and you’re most of the way to creating a market.
Interfaces in Organizations
Accurately communicating what we want is hard. Programmers and product designers are especially familiar with this:
Incentives are a problem sometimes (obviously don’t trust ads or salespeople), but even mostly-earnest communicators - customers, project managers, designers, engineers, etc - have a hard time explaining things. In general, people don’t understand which aspects are most relevant to other specialists, or often even which aspects are most relevant to themselves. A designer will explain to a programmer the parts which seem most design-relevant; a programmer will pay attention to the parts which seem most programming-relevant.
It’s not just that the structure of what-humans-want doesn’t match the structure of the real world. It’s that the structure of how-human-specialists-see-the-world varies between different specialists. Whenever two specialists in different areas need to convey what-they-want from one to the other, somebody/something has to do the work of translating between structures - in other words, we need an interface.
A particularly poignant example from several years ago: I overheard a designer and an engineer discuss a minor change to a web page. It went something like this:
Designer: "Ok, I want it just like it was before, but put this part at the top."
Engineer: "Like this?"
Designer: "No, I don't want everything else moved down. Just keep everything else where it was, and put this at the top."
Engineer: "But putting that at the top pushes everything else down."
Designer: "It doesn't need to. Look, just..."
... this went on for about 30 minutes, with steadily increasing frustration on both sides, and steadily increasing thumping noises from my head hitting the desk.
It turned out that the designer's tools built everything from the bottom of the page up, while the engineer's tools built everything from top down. So from the designer's perspective, "put this at the top" did not require moving anything else. But from the engineer's perspective, "put this at the top" meant everything else had to get pushed down.
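In code, the two mental models might look something like this (a hypothetical reconstruction - I never saw either tool's internals). Both functions produce the same page, but in one representation every existing position is unchanged, while in the other every position shifts:

```python
# Designer's tool: element positions measured from the BOTTOM of the page.
designer_page = {"footer": 0, "article": 1, "header": 2}

def designer_add_to_top(page, new_element):
    # The new element gets the next position up; nothing else moves.
    return {**page, new_element: max(page.values()) + 1}

# Engineer's tool: element positions measured from the TOP of the page.
engineer_page = {"header": 0, "article": 1, "footer": 2}

def engineer_add_to_top(page, new_element):
    # The new element takes position 0; everything else shifts down by one.
    return {new_element: 0, **{k: v + 1 for k, v in page.items()}}

print(designer_add_to_top(designer_page, "banner"))
# {'footer': 0, 'article': 1, 'header': 2, 'banner': 3}  <- old positions unchanged
print(engineer_add_to_top(engineer_page, "banner"))
# {'banner': 0, 'header': 1, 'article': 2, 'footer': 3}  <- every position changed
```

Same rendered page either way - but "keep everything where it was" is true in one coordinate system and false in the other.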
Somebody/something has to do the translation work. It’s a two-sided interface problem.
Handling these sorts of problems is a core function for managers and for anyone deciding how to structure an organization. It may seem silly to need to loop in, say, a project manager for every conversation between a designer and an engineer - but if the project manager’s job is to translate, then it can be useful. Remember, the example above was frustrating, but at least both sides realized they weren’t communicating successfully - if the double illusion of transparency kicks in, problems can crop up without anybody even realizing.
This is why, in large organizations, people who can operate across departments are worth their weight in gold. Interfaces are a scarce resource; people who operate across departments can act as human interfaces, translating model-structures between groups.
A great example of this is the 1986 Goldwater-Nichols Act, which was intended to fix a lack of communication/coordination between branches of the US military. The basic idea was simple: nobody could be promoted to general or flag officer rank without first completing a joint assignment, one in which they worked directly with members of other branches. People capable of serving as interfaces between branches were a scarce resource; Goldwater-Nichols introduced an incentive to create more such people. When the bill was introduced, top commanders of all branches argued against it; they saw it as congressional meddling. But after the first Iraq war, every one of them testified that it was the best thing to ever happen to the US military.
Summary
The structure of things-humans-want does not always match the structure of the real world, or the structure of how-other-humans-see-the-world. When structures don’t match, someone or something needs to serve as an interface, translating between the two.
In simple cases, this is just user interface design - accurately communicating how-the-thing-works to users. But when the system is more complicated - like a computer or a body of law - we usually need human specialists to serve as interfaces. Such people are expensive; interfaces to complicated systems are a scarce resource.
Comments
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-12-29T01:18:36.088Z
What this post does for me is that it encourages me to view products and services not as physical facts of our world, as things that happen to exist, but as the outcomes of an active creative process that is still ongoing and open to our participation. It reminds us that everything we might want to do is hard, and that the work of making that task less hard is valuable. Otherwise, we are liable to make the mistake of taking functionality and expertise for granted.
What is not an interface? That's the slipperiest aspect of this post. A programming language is an interface to machine code, a programmer to the language, a company to the programmer, a liaison to the company, a department to the liaison, a chain of command to the department, a stock to the chain of command, an index fund to the stock, an app to the index fund. Matter itself is an interface. An iron bar is an interface to iron. An aliquot is an interface to a chemical. A fruit is an interface, translating between the structure of a chloroplast and the structure of things-animals-can-eat. A janitor is an interface to brooms and buckets, the layout of the building, and other considerations bearing on the task of cleaning. We have lots of words in this concept-cluster: tools, products, goods and services, control systems, and now "interfaces."
"As a scarce resource," suggests that there are resources that are not interfaces. After all, the implied value prop of this post is that it's suggesting a high-value area for economic activity. But if all economic activity is interface design, then a more accurate title is "Scarce Resources as Interfaces," or "Goods Are Hard To Make And Services Are Hard To Do."
The value I get out of this post is that it shifts my thinking about a tool or service away from the mechanism, and toward the value prop. It's also a useful reminder for an early-career professional that their value prop is making a complex system easier to use for somebody else, rather than ticking the boxes of their course of schooling or acquiring line items on their resume. There are lots of ways to lose one's purpose, especially if it's not crystal clear. This post does a good job of framing jobs not as a series of tasks you do, or an identity you have, but as an attempt to make a certain outcome easier to achieve for your client. It's fundamentally empathic in a way I find appealing.
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-04-08T21:46:27.404Z
This idea helps explain the tension between classroom work and career preparation.
A student building a career as a scientist will one day need to become an interface to their field. A cancer researcher needs to be an interface for several players: their colleagues and lab workers, grant makers, institutional administrators, suppliers, doctors, and patients.
By contrast, a school is an interface that vets and routes students, according to aptitude and interest, into further training programs or jobs. Students may perceive much of what they learn there as "training," when in fact it is a vetting procedure. Hence, they gain misguided notions of where to put their time if their objective is to make themselves into a useful interface for their intended field.
The principle of "becoming an interface" as a way to reclaim this lost purpose can help clarify a lot of scholarship tasks. For example, I strongly considered using flashcards to memorize entire textbooks, under the theory that if coursework is meant to build useful knowledge for scientific work, then the most efficient way to do that would be with flashcards. A Fermi estimate indicated I could probably memorize about 1.5 textbooks' worth of facts per year, with time left over for deliberate practice.
Yet is that the most useful way for me to build my skill as a "bioengineering interface?" Alternative forms of interface-improvement might include:
- Networking with people in the field, making friends, learning more about particular schools/labs/PIs
- Learning about how grants are made
- Learning broad skills for organizing research, decision making
- Reading articles on the economics, current projects, major medical issues, common techniques
- Specifically investigating gears-level mechanisms for particular problems (aging, for example)
- Reading the abstracts of 10 articles per day, and turning each into 3 flashcards on the paper's primary goal and finding, so that I can have a shallow but broad grasp of who's doing what and decide where to allocate my own energy based on that.
- Learning general-purpose skills: Linux, programming, productivity software, personal organization.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-24T00:09:06.046Z
Updating 6 months later, now that I've had about 6 weeks in my biomedical engineering MS program.
Yet is that the most useful way for me to build my skill as a "bioengineering interface?"
Absolutely not!
Particularly for a beginner like myself, the first and most crucial step is to demonstrate credibility in lab procedures and calculations. Your colleagues - fellow grad students - need to trust that you're capable of executing lab procedures competently. This allows them, or your PI, to give you projects, trust that you'll make good use of resources, and produce high-quality data that they're prepared to back with their own reputation.
The challenge of establishing your credibility never ends. As you gain competency in one area, you are then permitted to try your hand at more complex tasks.
Eventually, your period of being trained by others comes to an end, and it now becomes your task to develop new procedures and demonstrate that they work. Your ability to do this depends on the credibility you've established along the way. The real test of success here is refining your new methods to the point that other people can replicate them and put them into use for other purposes. Effectively, you create an interface to the method you've been developing.
Credibility is one essential component of this task. The others - the creativity, reasoning, collaboration, hard work, and scholarship that allow you to come up with the method in the first place - are design skills. A person could in theory have the ability to come up with an efficacious method, but lack the credibility to get the resources to attempt it, or to have others trust that the method actually works. It's also possible that a person could be credibly competent at the methods they've mastered in the past, but not have the ability to come up with new ones, or to communicate them effectively to others.
To make yourself into an interface to a scientific field, or to be able to create interfaces within that field, then, takes a combination of both design skills and execution skills.
Scholarship, of the kind I was focused on 6 months ago, is relevant to both of these skills. It helps you understand the mechanisms underpinning the techniques you're trying to master, and also is essential to the design process.
However, it's usually strategic to use the minimum input to get your goals accomplished, and in this case that means the minimum level of scholarship required to master the next technique or design the next project. The proof is in the success of your labor or of your project, not in the amount of facts you've got memorized to back it up in a verbal argument, or to impress the people who are vetting your background.
6 months ago, I lacked so many of the things required to choose or pursue meaningful scientific goals, so I was compensating by focusing on what was available to me: scholarship for its own sake.
Looking back on what I did over the last few years to prepare for grad school, I have learned SO much that would have helped me accelerate my growth if I could communicate it to my past self.
In particular:
- Skills about breaking down and understanding the content of a textbook, paper, or protocol.
- Ways to practice key skills even if you don't have access to the necessary equipment or materials.
- A greater appreciation for labs as a potential training ground for real-world tasks.
- Focusing more on developing competency in basic, routine skills, and less on trying to learn about everything under the sun. A greater appreciation for beginner's mind, and for learning before trying to contribute.
- Doing a whole lot more drawing when I take notes.
↑ comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-24T01:04:27.802Z
As a second note, one of the challenges, then, with school, is its mechanism by which it accomplishes vetting.
Vetting is a way of establishing the student's credibility. But it's a far too general form of credibility.
It's the sort of credibility that gets you a foot in the door in another school, or perhaps in an entry-level position.
But once you're there in that new environment, you have no specific credibility with the particular people in the new institution. Suddenly, you may realize that you failed to prepare yourself adequately to build credibility with them. You did what you had to do in your old institution to become credible enough to be given the opportunity to prove yourself in the new one - enough credibility to be permitted to try your hand at a more complicated task.
But the old institution was meant not only to make you credibly ready to try the new task - it was meant to make you credibly able to do the new task.
It focuses the minds of students, and of their teachers and advisors, on a form of success (getting admitted to the "next step") that will cease to be a meaningful form of success mere months after it has been accomplished.
A better approach to schooling would be one that helps the student focus on building credibility with the people they'll be working for in the future. That treats coursework like a means to understand the purpose of a protocol or project you're going to be involved in. That treats labs as preparation for actually doing the procedures in a repeated, reliable fashion in your next job or course of schooling.
Instead, the whole mentality of the institution seems geared around being "done with one thing, on to the next thing." It hardly rewards reliable, repeatable skill in a given task at all. It only rewards accomplishing things correctly a single time, and then moving on, perhaps forever. Sometimes, it doesn't even reward success - it just rewards showing up. Other times, it rewards success at a task that's unnecessarily difficult, prioritizing abilities, like arbitrary feats of memorization, that just aren't high-priority skills.
It doesn't even suggest that you begin building relationships with people in the real world - only with teachers at your present institution, precisely the set of people to whom the real success of your training will not matter.
Could we potentially adapt schools to integrate better with the "next steps" without fundamentally altering the approach to education? Is there some low-hanging fruit here, once we frame the problem like this? Even if not, can individual students adapt their approach to being schooled in order to improve matters?
As I see it, there are some things a student can do:
- Give themselves lots of reminders about the tangible reality of the next institution or role they'll be jumping to. Meet people, understand the job those people do, identify the basic skills and make sure you're practicing those ahead of time, understand the social challenges they face, and their ambitions. This is about more than "networking." It's about integrating.
- When doing an assignment or reading for a class, focus a lot of time and energy on understanding the relevance of the material to the next step. That will often mean understanding what, in a real-world context, you could safely look up, ignore, or delegate. It also means building the skills to reliably be able to look up and understand new knowledge when necessary, and being able to identify the bits and pieces that are critical to memorize and deeply internalize so as to make navigating the domain relatively easy.
- Focus on getting grades that are good enough to get where you want to go. Do not pride yourself on grades. Pride yourself on real-world abilities and relationships. A's are for admissions boards. Skills and relationships are for you.
- Figure out how to witness, experience, learn and practice skills that use equipment and materials that are inaccessible to you. You can create simulations, mockups, or even just act out the motions of a technique and playact a protocol. Learn how to train yourself effectively when mentorship or materials are inadequate.
comment by habryka (habryka4) · 2020-03-11T20:47:38.672Z
Promoted to curated: I generally think understanding interfaces is really important from a lot of different perspectives, and this post did a really good job illustrating a bunch of considerations around that. I also really liked the concreteness and the examples.
comment by adamShimi · 2021-12-24T11:36:49.211Z
These are not cheap industries. Lawyers, accountants, lobbyists, programmers… these are experts in complicated systems, and they get paid accordingly. The world spends large amounts of resources using people as interfaces - indicating that these kinds of interfaces are a very scarce resource.
Another example that fits the trend is the plumber. A more modern one is the prompt engineer.
comment by adamShimi · 2020-03-12T21:03:36.378Z
Great post! This makes me think of the problem of specification in formal methods: what you managed to formalize is not necessarily what you wanted to formalize. This is why certified software is only as good as the specification used for its certification. And that's one of my main intuitions about the issues of AI safety.
One part of the specification problem is probably about interfacing, like you write, between the maths of the real world and the maths we can understand/certify. But one thing I feel was not mentioned here is the issue of what I would call unknown ambiguity. One of the biggest difficulties of proving properties of programs and algorithms is that many parts of the behavior are considered obvious by the designer - think "the number of processes cannot be 0", or "this variable will never take this value, even if it's of the right type". Most of the time, once you add these obvious parts, you can finish the proof. But sometimes the "trivial" part was hiding the real problem, which breaks the whole thing.
So I think another scarce resource is people who can make every bit of the system explicit - people who can dig into every nitpick and rebuild everything from scratch.
comment by romeostevensit · 2020-03-06T22:52:13.916Z
I greatly enjoyed this. I see Apple, Microsoft, Google, and FB as interface companies first - Amazon too, but in a different way.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-03-15T00:16:45.357Z
I feel like if we count Amazon as an interface company, then we're going to have to count pretty much everything as an interface company, and the concept becomes trivial. If Amazon is an interface between factories and consumers, then factories are an interface between raw materials and Amazon, and Rio Tinto is an interface between Mother Earth and factories.
↑ comment by romeostevensit · 2020-03-15T01:26:47.501Z
Yeah but I sort of endorse thinking like this.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-03-15T12:25:03.539Z
But it's not novel, though? Like, I feel like everyone already knows it's important to think about companies in terms of what they do, i.e. in terms of what they take as inputs and then what they produce as outputs.
↑ comment by romeostevensit · 2020-03-15T20:36:17.296Z
I've experienced insight from holding a sort of strong behaviorist frame, treating the normal contents as black boxes and focusing entirely on inputs and outputs. It's sort of like switching between object-oriented and functional programming views of problems.
comment by Pattern · 2020-03-06T20:47:47.789Z
The real-world things we are interested in are abstract, high-level objects. At this point, we don’t even have the mathematical tools to work with these kinds of fuzzy abstractions.
It seems there are exceptions to this that are easier to work on. For instance, if you want a verbal/hands-free (as opposed to physical, button- or mouse-based) interface for 'playing music'.
comment by MSRayne · 2021-06-26T20:32:07.201Z
This actually clarifies for me what my role in life is intended to be: I want to be a human interface. I've always been an extreme generalist, unwilling or unable to specialize in any particular field of study, but enjoying all of them. This produces a certain amount of angst - the modern world does not really have a place for people who do not specialize in anything. But if I think of my specialty as being an interface for all those different fields to talk to one another (which I guess forces me into some kind of management role - and also explains my love of both UX design and systems theory), then that actually could work. I'm not sure how to actually reach the point of being able to do that, though.