Do the best ideas float to the top?
post by Quinn (quinn-dougherty) · 2019-01-21T05:22:51.182Z · LW · GW · 11 comments
It may depend on what we mean by “best”.
Epistemic status: I understand very little of anything.
Speculation about potential applications: regulating a logical prediction market, e.g. logical induction; constructing judges or competitors in e.g. alignment by debate; designing communication technology, e.g. to mitigate harms and risks of information warfare.
The slogan "the best ideas float to the top" is often used in social contexts. The saying goes, "in a free market of ideas, the best ideas float to the top". Of course, it is not intended as a statement of fact, as in "we have observed that this is the case"; it is a statement of values, as in "we would prefer for this to be the case".
In this essay, however, we will force an empirical interpretation, just to see what happens. I will provide three ways to consider the density of an idea, or the number assigned to how float-to-the-top an idea is.
In brief, an idea is a sentence, and you can vary the amount of its antecedent graph (as in Bayesian nets or NARS-like architectures) or of the function out of which it is printed (as in compression) that you want to consider at a given moment, up to resource allocation. This isn't an entirely mathematical essay, so don't worry about WFFs, parsers, etc.; that's why I'll stick with "ideas" instead of "sentences". I will also be handwaving between "description of some world states" and "belief about how world states relate to each other".
Intuition
Suppose you observe wearers of teal hats advocate for policy A, but you don’t know what A is. You’re minding your business in an applebees parking lot when a wearer of magenta hats gets your attention to tell you “A is harmful”. There are two cases:
- Suppose A is "kicking puppies" (and I don't mean the wearer of magenta hats is misleadingly compressing A to you; I mean the policy is literally kicking puppies). The inferential gap between you and the magentas can be closed very cheaply, so you're quickly convinced that A is harmful (unless you believe that kicking puppies is good).
- Suppose A is “fleegan at a rate of flargen”, where fleeganomics is a niche technical subject which nevertheless can be learned by anyone of median education in N units[^1] or less. Suppose also that you know the value of N, but you’re not inclined to invest that much compute in a dumb election, so you either a. take them at their word that A is harmful; b. search the applebees for an authority figure who believes that A is harmful, but believes it more credibly; or c. leave the parking lot without updating in any direction.
“That’s easy, c” you respond, blindingly fast. You peel out of there, and the whole affair makes not a dent in your epistemic hygiene. But you left behind many others. Will they be as strong, as wise as you?
“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
– Herbert Simon
Let's call case 1 "constant" and case 2 "linear". We assume that constant means negligible cost, and that linear means cost linear in pedagogical length (where pedagogical cost is some measure of the resources needed to acquire some sort of understanding).
A regulator, unlike you, isn't willing to leave anyone behind for evangelists and pundits to prey on. This is the role I'm assuming for this essay. I will ultimately propose a negative attention tax, in which the constant cases would be penalized to give the linear cases a boost. (It's like a negative income tax, replacing money with attention.)
If you could understand fleeganomics in N/100000 bits, would it be worth it to you then?
Let’s force an empirical interpretation of “the best ideas float to the top”
Three possible measures of density:
- density_1: the simplest ideas float to the top.
- density_2: the truest ideas float to the top.
- density_3: the ideas which advance the best values float to the top, where by "advance the best values" we mean either a. maximize my utility function, not yours; or b. maximize the aggregate/average utility function of all moral patients, without emphasis on zero-sum faceoffs between opponent utility functions.
Each in turn implies a sort of world in which it is the sole interpretation, and thus the sole factor over beliefs of truth-seekers.
The intuition given above leans heavily on density_3; however, we must start much lower, at the fundamentals of simplicity and truth. From now on, for brevity's sake, please ignore density_3 and focus on the first two.
density_1: The Simplest Ideas Float to the Top.
If you form a heuristic by philosophizing about the conjunction rule in probability theory, you get Occam's razor. In machine learning, we have model selection methods that directly penalize complexity. Occam's razor doesn't say anything about the reception of ideas in a social system, beyond implying that in gambling the wise bet on shorter sentences (insofar as the wise are gamblers).
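To make "directly penalize complexity" concrete, here is a minimal sketch (mine, not the post's) using the Bayesian Information Criterion, one standard model-selection penalty: each extra parameter has to buy enough fit to pay for itself.

```python
import numpy as np

def bic(y, y_hat, k):
    """Bayesian Information Criterion (lower is better): a fit term
    plus a penalty that grows with the number of parameters k."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + rng.normal(0, 0.1, size=50)  # the truth is a simple linear law

for degree in (1, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    print(degree, round(bic(y, np.polyval(coeffs, x), k=degree + 1), 1))
# The degree-1 fit wins: the higher-degree polynomials fit the noise
# slightly better, but not enough to cover their complexity penalty.
```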
If we assume that the wearer of magenta hats is maximizing something like petition signatures, and by proxy maximizing the number of applebees patrons converted to magenta-hat-wearing applebees patrons, then in the world of density_1 they ought to persuade only via statements with constant or negligible cost. (Remember, in the world of density_1, statements needn't have any particular content to be successful. In an idealized setting, this would mean the empty string gets 100% of the vote in every election, 100% of traders purchase nothing but the empty string, etc.; in a human setting, think of the "smallest recognizable belief".)
density_2: The Truest Ideas Float to the Top.
If the truest ideas floated to the top, then statements with more substantial truth values (i.e., more evidence, more compelling evidence, stronger inferential steps) would win out against those with less substantial truth values. In a world governed only by density_2, all cost is negligible.
In this world, the wearer of magenta hats is incentivized to teach fleeganomics – to bother themselves (and others) with linear-cost ideas – if that's what leads people to more substantially held beliefs or commitments. This is a sort of oracle world; in a word, logical omniscience.
In a market view, truth prevails only in the long run (in the same way that price converges to value, or supply to demand, without your being able to pinpoint when they're equal), which is why the density_2 interpretation is suitable for oracles, or at least for the infinite resources of AIXI-like agents. If you tried to populate the world of density_2 with logically uncertain, AIKR-abiding agents, the entire appeal of markets would evaporate. "Those who know they are not relevant experts shut up, and those who do not know this eventually lose their money, and then shut up" (Hanson), but without the "eventually".
Negative attention tax
Now suppose we live in some world where density_1 and density_2 operate at the same time, with foggier, handwavier things like density_3 on the margins. In such a world, we say false-complicated ideas are robustly uncompetitive and true-simple ideas are robustly competitive, where "robust" means "resilient to foul play", and "foul play" means "any misleading compression, fallacious reasoning, etc.". Without such resilience, we risk that false-simple ideas will succeed and true-complicated ideas will fail.
A regulator isn’t willing to leave anyone behind for evangelists and pundits to prey on.
Perhaps we want free attention distributed to true-but-complicated things, and penalties applied to false-but-simple things. In economics, a negative income tax (NIT) is a welfare system within an income tax where people earning below a certain amount receive supplemental pay from the government instead of paying taxes to the government.
For us, a negative attention tax (NAT) is a welfare system where ideas demanding above a certain amount of compute receive supplemental attention, and ideas below that amount pay up.
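As a toy formalization (mine – the post doesn't give one), the two schedules are mirror images: under NIT the transfer is proportional to how far income falls below the threshold, and under NAT it's proportional to how far an idea's compute demand rises above it.

```python
def nit_transfer(income: float, threshold: float, rate: float) -> float:
    """Net transfer to a person under a negative income tax.
    Positive means the government pays you; negative means you pay tax."""
    return rate * (threshold - income)

def nat_transfer(compute_cost: float, threshold: float, rate: float) -> float:
    """Net attention transfer to an idea under a negative attention tax.
    The sign flips: ideas demanding more compute than the threshold
    receive supplemental attention; cheap ideas pay up."""
    return rate * (compute_cost - threshold)

print(nat_transfer(compute_cost=10, threshold=50, rate=0.1))   # -4.0: a simple idea is handicapped
print(nat_transfer(compute_cost=400, threshold=50, rate=0.1))  # 35.0: a complicated idea gets a boost
```

(The threshold and rate are placeholders; nothing in this essay pins down the units of attention – see the footnote on attentional units.)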
| density_2 \ density_1 | Simple | Complicated |
|---|---|---|
| False | I'm saying this is a failure mode, danger zone, etc. | Robustly uncompetitive (won't bother us) |
| True | Robustly competitive (these will be fine) | I'm saying the solution is to give these sentences a boost. |
An example implementation: suppose I'm working at nosebook in the year of our lord. When I notice certain posts get liked/shared blindingly fast, and others take more time, I suppose that the simple ones are some form of epistemic foul play, and the complicated ones are more likely to align with epistemic norms we prefer. I make an algorithm to suppress posts that get liked/shared too quickly, and replace their spots in the feed with posts that seem to be digested before getting liked/shared. (Disclaimer: this is not a resilient proposal; I spent all of 10 seconds thinking about it. Please defer to your nearest misinformation expert.)
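A sketch of that ten-second algorithm, with every name and threshold invented for illustration (the essay specifies none of them):

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagements: int
    median_seconds_to_engage: float  # hypothetical signal: time from view to like/share

# Hypothetical cutoff: engagement faster than this looks like a reflex, not digestion.
REFLEX_CUTOFF_SECONDS = 15.0

def rank_key(post: Post) -> float:
    """Score posts for the feed: suppress reflexively-shared posts,
    boost posts that seem to be digested before being liked/shared."""
    if post.median_seconds_to_engage < REFLEX_CUTOFF_SECONDS:
        return -float(post.engagements)  # fast engagement now counts against you
    return post.engagements * post.median_seconds_to_engage

def build_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=rank_key, reverse=True)
```

This inherits the prose version's disclaimer: it's trivially gameable (just wait fifteen seconds before liking), so treat it as an illustration of NAT's shape, not a proposal.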
Individuals apply NAT credits to interesting-looking complicated ideas; complicated ideas aren't directly supplied with these supplements in the way that simple ideas are automatically handicapped
Though the above may be a valid interpretation, especially in the nosebook example, NAT is more properly understood as credits allocated to individuals for them to spend freely.
You can imagine the stump speech.
extremely campaigning voice: I'm going to make sure every member of every applebees parking lot has a slowed/handicapped mental speed when faced with simple ideas, and this will come back to them as tax credits they can spend on complicated ideas. Every applebees patron deserves complexity, even if they can't afford the full compute/price for it.
--footnote-- [^1]: "Pedagogical cost" is loosely inspired by "algorithmic decomposition" in Between Saying and Doing. TL;DR: to reason about a student acquiring long division, we reason about their acquisition of subtraction and multiplication. For us, the pedagogical cost or length of some capacity is the sum of the lengths of its prerequisite capacities. We'll consider our pedagogical units as some function on attentional units. Herbert Simon dismisses adopting Shannon's bit as the attentional unit, because he wants something invariant under different encoding choices. He goes on to suggest time, in the form of "how long it takes for the median human cognition to digest". This can be our base unit for parsing things you already know how to parse, even though extending it to pedagogical cost wouldn't be as stable, because we don't understand teaching or learning very well.
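One hypothetical way to cash out "sum of the lengths of its prerequisite capacities" (my reading, with made-up costs, and a guard so shared prerequisites like addition are counted once):

```python
def pedagogical_cost(capacity, prerequisites, base_cost, _seen=None):
    """Cost of acquiring a capacity: its own base cost plus the cost of
    every distinct prerequisite, transitively, each counted once.

    prerequisites: dict mapping capacity -> list of prerequisite capacities
    base_cost: dict mapping capacity -> cost in (say) median-human time units
    """
    if _seen is None:
        _seen = set()
    if capacity in _seen:
        return 0.0
    _seen.add(capacity)
    return base_cost[capacity] + sum(
        pedagogical_cost(p, prerequisites, base_cost, _seen)
        for p in prerequisites.get(capacity, [])
    )

# The footnote's example: long division builds on subtraction and multiplication.
prereqs = {"long_division": ["subtraction", "multiplication"],
           "subtraction": ["addition"], "multiplication": ["addition"]}
costs = {"long_division": 5.0, "subtraction": 2.0,
         "multiplication": 3.0, "addition": 1.0}
print(pedagogical_cost("long_division", prereqs, costs))  # 11.0 – addition counted once
```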
11 comments
Comments sorted by top scores.
comment by Benquo · 2019-01-22T17:51:56.818Z · LW(p) · GW(p)
Optimizing for Stories [LW · GW] seems like an exploration of a strongly related topic, and it would be nice to get the OP’s perspective on it.
Replies from: quinn-dougherty, quinn-dougherty
↑ comment by Quinn (quinn-dougherty) · 2019-01-22T22:00:47.677Z · LW(p) · GW(p)
This one had slipped by me, so thanks for pointing me to it. It'll take me at least a week to read and digest. I'll add a comment here (eventually) if I have anything to say.
↑ comment by Quinn (quinn-dougherty) · 2019-02-24T16:42:18.867Z · LW(p) · GW(p)
IMO, this is what I briefly suggested by linking to Scott's Against Murderism with the words "misleading compression", i.e., I think describing a policy as murderistic and optimizing for stories are each instances of misleading compression.
"If it's only stories which matter, yet you split your efforts between stories and reality, then you will likely be outcompeted by someone who spent all of their resources on crafting good stories."
This is 100% what I find alarming about misinformation (both the malicious kind and the emergent/inadequate kind), and I don't know a reason why alignment via debate would be resilient to it.
comment by Matt Goldenberg (mr-hire) · 2019-01-21T11:07:04.517Z · LW(p) · GW(p)
I found it curious that there wasn't a single mention of memetics in this essay. It made me think that maybe you've started to evolve some similar ideas without knowing there was a prior field (now basically defunct as its own field, as the sole journal shut down) developed to study this question.
The basic answer that memetics gives is that "the ideas most likely to support their own survival and replication float to the top." Simplicity here is one factor that matters, as is truth, as is matching the values of the average person, to the extent that these factors support survival and replication.
For instance, "having children will bring you joy" is a common example in memetics. The idea may not be any simpler, more truthful, or more value-giving than other similar beliefs about what having children will bring you, but it will tend to propagate because the people who believe it have more children to pass the idea on to.
Replies from: Aiyen, quinn-dougherty
↑ comment by Aiyen · 2019-01-21T15:32:39.425Z · LW(p) · GW(p)
This. Also, political factors: ideas that boost the status of your tribe are likely to be very competitive independently of truth, and nearly so of complexity (though if they're too complex one would expect to see simplified versions propagating as well).
↑ comment by Quinn (quinn-dougherty) · 2019-01-22T21:57:38.154Z · LW(p) · GW(p)
I don't know a lot about evolution, but I suspect any benefits of building on memetics work directly would fall under the umbrella of "what about when we're tipping the scale in favor of some ingroup?". I defined density_3 as a placeholder for this along with all maximization related issues, and then said "we'll ignore this for now and focus on more basic foundations". I don't know if I'll return to it, but if I do, it'll take me a really really long time.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2019-01-24T11:53:11.220Z · LW(p) · GW(p)
The thing I was trying to point at is that memetics IS the basic foundation. All three of the items you mentioned are side effects of survival and replication characteristics, not something that underlies them.
It may be that the work you're trying to do here has already been done.
Replies from: quinn-dougherty
↑ comment by Quinn (quinn-dougherty) · 2019-02-24T16:25:46.303Z · LW(p) · GW(p)
Sorry. The point was NAT; density_{1,2,3} was devised as scaffolding for the MVB (minimum viable blogpost). I imagine that NAT has already been discovered, discussed, problematized, etc. somewhere, but I couldn't find it. I have a background assumption that attention economists are competent and well-intentioned people, so I trust that they have the situation under control.
comment by BurntVictory · 2019-01-22T22:16:51.386Z · LW(p) · GW(p)
I liked the playful writing here.
Maybe I'm being dumb, but I feel like spelling out some of your ideas would have been useful. (Or maybe you're just playing with ~pre-rigor intuitions, and I'm overthinking this.)
I think "float to the top" could plausibly mean:
A. In practice, human nature biases us towards treating these ideas as if they were true.
B. Ideal reasoning implies that these ideas should be treated as if they were true.
C. By postulate, these ideas end up reaching fixation in society. [Which then implies things about what members of society can and can't recognize, e.g. the existence of AIXI-like actors.]
Likewise, what level do you want a NAT to be implemented at? Personal behavior? Structure of group blog sites? Social norms?
Replies from: quinn-dougherty
↑ comment by Quinn (quinn-dougherty) · 2019-02-24T16:17:50.124Z · LW(p) · GW(p)
thanks for your comment.
Likewise, what level do you want a NAT to be implemented at? Personal behavior? Structure of group blog sites? Social norms?
- personal behavior: probably not viable without a dystopian regime of microchips embedded into brains.
- structure of group blog sites: maybe – these things have been suggested and tried; I can't tell you how many times I've seen a reddit comment lamenting the incentives of their upvote system.
- weirdly, I found out about the Brave browser last week (weird because it's apparently been around for a while): attempting to overthrow advertising with an attention-measuring coin. This is great news!
- I was thinking a lot about NAT reading this paper. In the context of debate judges, NAT is a bit of a "last minute jerry-rig / frantically shore up the levee" solution, something engineers would stumble upon in an elaborate and convoluted debugging process – the exact opposite of the kind of solutions alignment researchers are interested in.
if an AI comes to you and says, “I would like to design the particle accelerator this way because,” and then makes to you an inscrutable argument about physics, you’re faced with this tough choice. You can either sign off on that decision and see if it has good consequences, or you can be like, no, don’t do that ’cause I don’t understand it. -- Paul Christiano
- Tim Wu's "Is the first amendment obsolete?" is important and I think everybody should read it.
comment by Jay Molstad (jay-molstad) · 2019-01-22T01:18:45.097Z · LW(p) · GW(p)
I'm pretty sure the negative attention tax is called "school". Society has a set of things it wants us to know (called a "curriculum") and forces kids to sit in chairs while the things are slowly explained. They can't actually make anyone pay attention, though.