What's your big idea?

post by Gordon Seidoh Worley (gworley) · 2019-10-18T15:47:07.389Z · LW · GW · 1 comment

This is a question post.

At any one time I usually have between 1 and 3 "big ideas" I'm working with. These are generally broad ideas about how some particular thing works, with many implications for how the rest of the world works. Some big ideas I've grappled with over the years, in roughly historical order:

I'm sure there are more. Sometimes these big ideas come and go in the course of a week or month: I work the idea out, maybe write about it, and feel it's wrapped up. Other times I grapple with the same idea for years, feeling it has loose ends in my mind that matter and that I need to work out if I'm to understand things well enough to help reduce existential risk.

So with that as an example, tell me about your big ideas, past and present.

I kindly ask that if someone answers and you are thinking about commenting, please be nice to them. I'd like this to be a question where people can share even their weirdest, most wrong-on-reflection big ideas if they want to, without fear of being downvoted to oblivion or subjected to criticism of their reasoning ability. If you have something to say that's negative about someone's big ideas, please be nice and make it clearly about the idea, not the person (violators will have their comments deleted and may be banned from commenting on this post or all my posts, so I mean it!).

Answers

answer by johnswentworth · 2019-10-19T01:03:30.372Z · LW(p) · GW(p)

The big three:

  • Scientific progress across a wide variety of fields [LW · GW] is primarily bottlenecked on the lack of a general theory of adaptive systems (i.e. embedded agency [LW · GW])
  • Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale
  • Personally, my own relative advantage in solving technical problems increases with difficulty of the problem across a wide variety of domains

A few sub-big-ideas:

comment by jmh · 2019-10-20T14:13:37.602Z · LW(p) · GW(p)

Regarding economic progress:

  • Solving the coordination problem at scale seems related to my musing (though not new, as there is a large literature) about firms, and particularly large corporations. Many big corporations seem more suitable to model as markets themselves rather than as market participants. That seems like it would have significant implications for both standard economic modeling and policy analysis. It goes back to Coase's old article, The Nature of the Firm.
  • Given the availability of technology, and how that technology should have (and has) reduced costs, why are so many developing countries still "developing"? How much of that might be driven more by culture than by cost, access to trade partners, investment, financing, or a number of other standard economic explanations?

What we don't understand looks like random noise: perfect encryption should also look exactly like random noise. Is that perhaps why the universe seems so empty of other intelligent life? Clearly there are other explanations for why we might not be able to identify such signals (e.g., syntax, grammar, and encoding so alien we are unable to see a pattern, or perhaps signal pollution and interference from all the electromagnetic sources in the universe), but how could we differentiate?

Replies from: johnswentworth
comment by johnswentworth · 2019-10-20T22:42:04.715Z · LW(p) · GW(p)

I would say that perfect encryption is a great example of something we don't understand which looks like noise: at first it looks totally random, but if someone hands you a short key, suddenly it becomes obvious that the "noise" is highly systematic. That's understanding. The problem is that achieving understanding is not always computationally tractable.
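
A toy sketch of this in Python (assuming a one-time pad, purely for illustration): without the key the ciphertext is statistically indistinguishable from random bytes; handed the key, the "noise" collapses back into obvious structure.

```python
import os

message = b"highly systematic, highly systematic, highly systematic"
key = os.urandom(len(message))  # one-time-pad key, as long as the message

# XOR with a uniformly random key: to anyone without the key, the result
# looks exactly like random noise.
ciphertext = bytes(m ^ k for m, k in zip(message, key))

# With the key, the structure becomes obvious again.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
```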

comment by romeostevensit · 2019-10-27T02:03:17.775Z · LW(p) · GW(p)

>Economic progress across a wide variety of industries is primarily bottlenecked on coordination problems, so large economic profits primarily flow to people/companies who solve coordination problems at scale

Upstream: setting the ontology that allows interoperability aka computer interface design = largest companies in the world. Hell, you can throw a GUI on IRC and get billions of dollars. That's how early in the game things are.

comment by [deleted] · 2019-10-19T15:48:14.990Z · LW(p) · GW(p)

Have you read any of Cosma Shalizi's stuff on computational mechanics? Seems very related to your interests.

Replies from: johnswentworth
comment by johnswentworth · 2019-10-19T21:55:25.270Z · LW(p) · GW(p)

I had not seen that, thank you.

answer by vmsmith · 2019-10-19T22:41:54.734Z · LW(p) · GW(p)

In October 1991, an event of such profound importance happened in my life that I wrote the date and time down on a yellow sticky. That yellow sticky has long been lost, but I remember it; it was Thursday, October 17th at 10:22 am. The event was that I had plugged a Hayes modem into my 286 computer and, with a copy of Procomm, logged on to the Internet for the first time. I knew that my life had changed forever.

At about that same time I wanted to upgrade my command line version of Word Perfect to their new GUI version. But the software was something crazy like $495, which I could not afford.

One day I had an idea: "Wouldn't it be cool if you could log on to the Internet and use a word processing program sitting on a mainframe or something located somewhere else? Maybe for a tiny fee or something."

I mentioned this to the few friends I knew who were computer geeks, and they all scoffed. They said that software prices would eventually be so inexpensive as to make that idea a complete non-starter.

Well, just look around. How many people are still buying software for their desktops and laptops?

I've had about a dozen somewhat similar ideas over the years (although none of that magnitude). What I came to realize was that if I ever wanted to make anything like that happen, I would need to develop my own technical and related skills.

So I got an MS in Information Systems Development, and a graduate certification in Applied Statistics, and I learned to be an OK R programmer. And I worked in jobs -- e.g., knowledge management -- where I thought I might have more "Ah ha!" ideas.

The idea that eventually emerged -- although not in such an "Ah ha!" fashion -- was that the single biggest challenge in my life, and perhaps most people's lives, is the absolute deluge of information out there. And not just out there, but in our heads and in our personal information systems. The word "deluge" doesn't really even begin to describe it.

So the big idea I am working on is what I call the "How To Get There From Here" project. And it's mainly about how to successfully manage the various information and knowledge requirements necessary to accomplish something. This ranges from how to even properly frame the objective to begin with...how to determine the information necessary to accomplish it...how to find that information...how to filter it...how to evaluate it...how to process it...how to properly archive it...etc., etc., etc.

Initially I thought this might end up a long essay. Now it's looking more like a small book. It's very interesting to me because it involves pulling in so many different ideas from so many disparate domains and disciplines -- e.g., library science, decision analysis, behavioral psychology -- and weaving everything together into a cohesive whole.

Anyway, that's the current big idea I'm working on.

comment by romeostevensit · 2019-10-27T02:01:01.132Z · LW(p) · GW(p)

Of ideation, prioritization, and implementation, I agree that prioritization is the most impactful, tractable, and neglected.

comment by alkay · 2019-10-25T23:46:29.919Z · LW(p) · GW(p)

Please see my post below. My current big idea is very similar to yours. I believe we may be able to exchange notes!

Replies from: rick-jones
comment by Rick Jones (rick-jones) · 2019-10-26T05:01:01.320Z · LW(p) · GW(p)

I got your PM. I live in Paris, France. Nonetheless, I would be happy to exchange notes. Can you access my e-mail?

Replies from: alkay
comment by alkay · 2019-10-31T00:28:46.580Z · LW(p) · GW(p)

I unfortunately did not. I am also unable to locate the message I sent you! Maybe it's because I am new to this site.

answer by Pee Doom (DonyChristie) · 2019-10-21T04:46:07.420Z · LW(p) · GW(p)

"Let's finish what Engelbart started"

1. Recursively decompose all the problem(s) (prioritizing the bottleneck(s)) behind AI alignment until they are simple and elementary.

2. Get massive 'training data' by solving each of those problems elsewhere, in many contexts, more than we need, until we have asymptotically reached some threshold of deep understanding of that problem. Also collect wealth from solving others' problems. Force multiplication through parallel collaboration, with less mimetic rivalry creating stagnant deadzones of energy.

3. We now have plenty of slack from which to construct Friendly AI assembly lines and allow for deviations in output along the way. No need to wring our hands with doom anymore as though we were balancing on a tightrope.

In the game Factorio, the goal is to build a rocket from many smaller inputs and escape the planet. I know someone who got up to producing 1 rocket/second. Likewise, we should aim much higher so we can meet minimal standards with monstrous reliability rather than scrambling to avoid losing.

See: Ought

answer by James_Miller · 2019-10-19T14:52:31.506Z · LW(p) · GW(p)

We should make thousands of clones of John von Neumann from his DNA. We don't have the technology to do this yet, but the upside benefit would be so huge it would be worth spending a few billion to develop the technology. A big limitation on the historical John von Neumann's productivity was not being able to interact with people of his own capacity. There would be regression to the mean with the clones' IQ, but the clones would have better health care and education than the historical von Neumann did plus the Flynn effect might come into play.

comment by Wei Dai (Wei_Dai) · 2019-10-20T15:38:34.647Z · LW(p) · GW(p)

There was some previous discussion of this idea in Modest Superintelligences [LW · GW] and its comments. I'm guessing nobody is doing it due to a combination of weirdness, political correctness, and short-term thinking. This would require a government effort and no government can spend this much resources on a project that won't have any visible benefits for at least a decade or two, and is also weird and politically incorrect.

comment by Viliam · 2019-10-19T20:31:29.794Z · LW(p) · GW(p)

What exactly is the secret ingredient of "being John von Neumann"? Is it mostly biological, something like unparalleled IQ; or rather a rare combination of very high (but not unparalleled) IQ with very good education?

Because if it's the latter, then you could create a proper learning environment, where only kids with sufficiently high IQ would be allowed. The raw material is out there; you would need volunteers, but a combination of financial incentives and career opportunities could get you some. (The kids would get paid for going there and following the rules. And even if they fail to become JvNs, they would still get great free education, so there is nothing to lose.) Any billionaire could do this as a private project.

(This is in my opinion where organizations like Mensa fail. They collect some potentially good material, but then do nothing with it. It's just "let's get them into the same room, and wait for a miracle to happen", and... surprise, surprise... what happens instead is some silly signaling games, like people giving each other pointless puzzles. An ideal version that I imagine would collect the high-IQ people, offer them free rationality training, and the ones who passed it would be split according to their interests -- math, programming, entrepreneurship... -- and provided coaching. Later, the successful ones would be honor-bound to donate money to the organization and provide coaching for the next generation. That is, instead of passively waiting for the miracle to happen, nudge people as hard as you can.)

Replies from: James_Miller
comment by James_Miller · 2019-10-20T01:32:36.461Z · LW(p) · GW(p)

Most likely von Neumann had a combination of (1) lots of additive genes that increased intelligence, (2) few additive genes that reduced intelligence, (3) low mutational load, (4) a rare combination of non-additive genes that increased intelligence (meaning genes with non-linear effects) and (5) lucky brain development. A clone would have the advantages of (1)-(4). While it might in theory be possible to raise IQ by creating the proper learning environment, we have no evidence of having done this so it seems unlikely that this was the cause of von Neumann having high intelligence.

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-20T02:25:39.303Z · LW(p) · GW(p)

I am confused. You might be talking about g, not IQ, since we have very significant evidence that we can raise IQ by creating proper learning environments, given that most psychometrics researchers credit widespread education for a large fraction of the Flynn effect, and generally don't think that genetic changes explain much.

A 2017 survey of 75 experts in the field of intelligence research suggested four key causes of the Flynn effect: Better health, better nutrition, more and better education, and rising standards of living. Genetic changes were seen as not important.[28] The experts' views agreed with an independently performed[29] meta-analysis on published Flynn effect data, except that the latter found life history speed to be the most important factor.[30]
Replies from: James_Miller, Viliam
comment by James_Miller · 2019-10-20T13:34:00.079Z · LW(p) · GW(p)

Yes, I am referring to "IQ" not g because most people do not know what g is. (For other readers: IQ is the measurement, g is the real thing.) I have looked into IQ research a lot and spoken to a few experts. While genetics likely doesn't play much of a role in the Flynn effect, it plays a huge role in g and IQ. This is established beyond any reasonable doubt. IQ is a very politically sensitive topic and people are not always honest about it. Indeed, some experts admit to other experts that they lie when discussing IQ in public (source: my friend and podcasting partner Greg Cochran; the podcast is Future Strategist). We don't know if the Flynn effect is real; it might just come from measurement errors arising from people becoming more familiar with IQ-like tests, although it also could reflect real gains in g that are being captured by higher IQ scores. There is no good evidence that education raises g. The literature on IQ is so massive, and so poisoned by political correctness (and, some would claim, racism), that it is not possible to resolve the issues you raise by citing literature. If you ask IQ experts why they disagree with other IQ experts, they will say that the other experts are idiots/liars/racists/cowards. I interviewed a lot of IQ experts when writing my book Singularity Rising.

Replies from: habryka4, habryka4, habryka4
comment by habryka (habryka4) · 2019-10-20T19:18:57.016Z · LW(p) · GW(p)

To be clear, I think it's very obvious that genetics has a large effect on g. The key question that you seemed to dismiss above is whether education or really any form of training has an additional effect (or more likely, some complicated dynamic with genetics) on g.

And after looking into this question a lot over the past few years, I think the answer is "maybe, probably a bit". The big problem is that for population-wide studies, we can't really get nice data on the effects of education, because the Flynn effect is adding a pretty clear positive trend, and geographic variance in education levels doesn't really capture what we would naively think of as the likely contributors to the observed increase in g.

And you can't do directed interventions because all IQ tests (even very heavily g-loaded ones) are extremely susceptible to training effects, with even just an hour of practicing on Raven's progressive matrices seeming to result in large gains. As such, you can't really use IQ tests as any kind of feedback loop, and almost any real gains will be drowned out by the local training effects.

comment by habryka (habryka4) · 2019-10-20T18:57:17.247Z · LW(p) · GW(p)
(For other readers: IQ is the measurement, g is the real thing.)

This seems like a misleading summary of what g is.

g is the shared principal component of various subsets of IQ tests. As such, it measures the shared variance between your performance on many different tasks, and so is the thing that we expect to generalize most between different tasks. But in most psychometric contexts I've seen, we split g into 3-5 different components, which tends to add significant additional predictive accuracy (at the cost of simplicity, obviously).

To describe it as "the real thing" requires defining what our goal with IQ testing is. Results on IQ tests have predictive power over income and life-outcomes even beyond the variance that is explained by g, and predictive power over outcomes on a large variety of different tasks beyond only g.

The goal of IQ tests is not to measure g, it isn't even clear whether g is a single thing that can be "measured". The goal of IQ tests historically has been to assess aptitude for various jobs and roles (such as whether you should be admitted to the military, which is where a large fraction of our IQ-score data comes from). For those purposes, we've often found that solely focusing on trying to measure aptitude that generalizes between tasks is a bad idea, since there is still significant task-specific variance that we care about, and would have to give up on measuring in the case of defining g as the ultimate goal of measurement.

comment by habryka (habryka4) · 2019-10-20T19:06:29.770Z · LW(p) · GW(p)
We don't know if the Flynn effect is real, it might just come from measurement errors arising from people becoming more familiar with IQ-like tests, although it also could reflect real gains in g that are being captured by higher IQ scores.

I think the Flynn effect has been pretty solidly established, as well as the fact that it has had a significant effect on g.

I do think the most likely explanation of a large fraction of the effect on g is explained via the other factors I cited above, namely better nutrition and more broadly better health-care, resulting in significantly fewer deficiencies.

comment by Viliam · 2019-10-20T14:23:09.000Z · LW(p) · GW(p)

By "g, not IQ" you mean the difference between genotype and phenotype, or something else?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2019-10-20T16:30:19.988Z · LW(p) · GW(p)

The g-factor, or g for short, is the thing that IQ tries to measure.

The name "g factor" comes from the fact that it is a common, general factor which all kinds of intelligence draw upon. For instance, Deary (2001) analyzed an American standardization sample of the WAIS-III intelligence test, and built a model where performance on the 13 subtests was primarily influenced by four group factors, or components of intelligence: verbal comprehension, perceptual organization, working memory, and processing speed. In addition, there was a common g factor that strongly influenced all four.

The model indicated that the variance in g was responsible for 74% of the variance in verbal comprehension, 88% of the variance in perceptual organization, 83% of the variance in working memory, and 61% of the variance in processing speed.

Technically, g is something that is computed from the correlations between various test scores in a given sample, and there's no such thing as the g of any specific individual. The technique doesn't even guarantee that g actually corresponds with any physical quantity, as opposed to something that the method just happened to produce by accident.

So when you want to measure someone's intelligence, you make a lot of people take tests that are known to be strongly g-loaded. That means that the performance on the tests is strongly correlated with g. Then you take their raw scores and standardize them to produce an IQ score, so that if e.g. only 10% of the test-takers got a raw score of X, then anyone getting the raw score of X is assigned an IQ indicating that they're in the top 10% of the population. And although IQ still doesn't tell us what an individual's g score is, it gives us a score that's closely correlated with g.
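
A toy numeric sketch of the above, assuming four hypothetical subtests that all draw on one latent factor: g-like structure shows up as the first principal component of the sample correlation matrix (so it's a property of the sample, not of any individual), and an IQ-style score is just the standardized composite.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate one latent general factor and four subtests that load on it.
latent_g = rng.normal(size=n)
loadings = np.array([0.86, 0.94, 0.91, 0.78])     # hypothetical g-loadings
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
subtests = latent_g[:, None] * loadings + noise

# "g" as computed in practice: from correlations in a sample -- here, the
# first principal component of the subtest correlation matrix.
corr = np.corrcoef(subtests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)           # eigenvalues in ascending order
recovered_loadings = eigvecs[:, -1]               # top component (up to sign)
share_explained = eigvals[-1] / eigvals.sum()     # variance the common factor explains

# An IQ-style score: composite of the subtests, standardized to mean 100, sd 15.
raw = subtests.sum(axis=1)
iq = 100 + 15 * (raw - raw.mean()) / raw.std()
```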

Replies from: habryka4, Viliam
comment by habryka (habryka4) · 2019-10-20T19:01:59.358Z · LW(p) · GW(p)
The g-factor, or g for short, is the thing that IQ tries to measure.

See my reply above. I think thinking about IQ tests trying to "measure g" is pretty confusing, and while I used to have this view, I updated pretty strongly against it after reading more of the psychometrics literature.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2019-10-20T19:27:27.232Z · LW(p) · GW(p)

Hmm. This interpretation was the impression that I recall getting from reading Jensen's The g Factor, though it's possible that I misremember. Though it's possible that he was arguing that IQ tests should be aiming to measure g, even if they don't necessarily always do, and held the most g-loaded ones as the gold standard.

Replies from: habryka4
comment by habryka (habryka4) · 2019-10-20T19:34:34.206Z · LW(p) · GW(p)

I think it's important to realize that what g is, shifts when you change what subtests your IQ test consists of, and how much "weight" you give to each different result. And as such it isn't itself something that you can easily optimize for.

Like, you always have to define g-loadings with respect to a test battery over which you measure g. And while the g's of different test batteries are themselves highly correlated, they are not perfectly correlated, and those correlations do come apart as you optimize for it.

Like, an IQ test with a single task will obviously find a single g-factor that explains all the variance in the test results.

As such, we need to define a grounding for IQ tests that is about external validity and predictiveness of life-outcomes or outcomes on pre-specified tasks. And then we can analyze the results of those tests and see whether we can uncover any structure, but the tests themselves have to aim to measure something externally valid.

To make this more concrete, the two biggest sources of IQ-test data we have come from American SAT scores and the Norwegian military draft, which has had an IQ-test component for all males above 18 years old since the middle of the 20th century.

The goal of the SAT was to be a measure of scholastic aptitude, as well as a measure of educational outcomes.

The goal of the norwegian military draft test was to be a measure of military aptitude, in particular to screen out people below a certain threshold of intelligence that were unfit for military service and would pose a risk to others, or be a net-drag on the military.

Neither of these are optimized to measure g. But we found that the test results in both of these score batteries are well-explained by a single g-factor. And the fact that whenever we try to measure aptitude on any real-life outcomes, we seem to find a common g-factor, is why we think there is something interesting with g going on in the first place.

comment by Viliam · 2019-10-20T17:57:55.348Z · LW(p) · GW(p)

If "X" is something we don't have a "gears model" of yet, aren't "tests that highly correlate with X" the only way to measure X? Especially when it's not physics.

In other words, why go the extra mile to emphasize that Y is merely the best available method to measure X, but not X itself? Is this a standard way of talking about scientific topics, or is it only used for politically sensitive topics?

Replies from: Kaj_Sotala, habryka4
comment by Kaj_Sotala · 2019-10-20T19:16:48.478Z · LW(p) · GW(p)

Here the situation is different in that it's not just that we don't know how to measure X, but rather the way in which we have derived X means that directly measuring it is impossible even in principle.

That's distinct from something like (say) self-esteem, where it might be the case that we might figure out what self-esteem really means, or at least come up with a satisfactory instrumental definition for it. There's nothing in the normal definition of self-esteem that would make it impossible to measure on an individual level. Not so with g.

Of course, one could come up with a definition for something like "intelligence", and then try to measure that directly - which is what people often do, when they say that "intelligence is what intelligence tests measure". But that's not the same as measuring g.

This matters because it's part of what makes e.g. the Flynn effect so hard to interpret - yes, raw test scores on IQ tests have gone up, but have people actually gotten smarter? We can't directly measure g, so a rise alone doesn't yet tell us anything. On the other hand, if people's scores on a test of self-esteem went up over time, then it would be much more straightforward to assume that people's self-esteem has probably actually gone up.

comment by habryka (habryka4) · 2019-10-20T19:14:59.563Z · LW(p) · GW(p)

In this case it's important to emphasize that difference, because a commonly raised hypothesis is that while we can see clear training effects on IQ, none of these effects are on the underlying g-factor, i.e. the gains do not generalize to new tasks. For naive interventions, this has been pretty clearly demonstrated:

IQ scores provide the best general predictor of success in education, job training, and work. However, there are many ways in which IQ scores can be increased, for instance by means of retesting or participation in learning potential training programs. What is the nature of these score gains?
[...]
The meta-analysis of 64 test– retest studies using IQ batteries (total N= 26,990) yielded a correlation between g loadings and score gains of −1.00, meaning there is no g saturation in score gains.
comment by [deleted] · 2019-10-20T16:59:55.307Z · LW(p) · GW(p)

Do you think it would make a big difference though? Isn't it likely that a bunch of John von Neumanns are already running around given the world's population? Aren't we just running out of low-hanging fruits for von Neumanns to pick?

Replies from: James_Miller
comment by James_Miller · 2019-10-20T18:27:42.702Z · LW(p) · GW(p)

While you might be right, it's also possible that von Neumann doesn't have a contemporary peer. Apparently top scientists who knew von Neumann considered von Neumann to be smarter than the other scientists they knew.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2019-10-31T22:02:19.038Z · LW(p) · GW(p)

The world population is larger than it used to be, and far more capable people are able to go to college and grad school than before. I would assume that there are many von Neumanns running around, and in fact there are probably people who are even better running around too.

answer by romeostevensit · 2019-10-18T16:26:35.762Z · LW(p) · GW(p)

The negative principle: it seems like in a huge number of domains people default to positivist accounts or representations of things, yet when we look at the history of big ideas in STEM I think we see a lot of progress happening from people thinking about whatever the inverse of the positivist account is. The most famous example I know of is information theory, where Shannon resolved a long-standing confusion by thinking in terms of uncertainty reduction. I think language tends to be positivist in its habitual forms, which is why this is a recurring blind spot.

Levels of abstraction: Korzybski, Marr, etc.

Everything is secretly homeostasis

Modal analysis: what has to be true about the world for a claim to have any meaning at all i.e. what are its commitments

Type systems for uncertainty


answer by [deleted] · 2019-10-20T22:00:11.096Z · LW(p) · GW(p)

A lot of these are quite controversial:

  • AI alignment has failed once before, we are the product
  • Technical obstacles in the way of AGI are our most valuable resource right now, and we're rapidly depleting them
  • A future without superintelligent AI is also dystopian by default (being turned into paperclips doesn't sound so bad by comparison)
  • AI or Moloch, the world will eventually be taken over by something because there is a world to be taken over
  • We were just lucky nuclear weapons didn't turn out to be an existential threat; we might not be so lucky in the future

  • The (observable) universe is tiny on the logarithmic scale
  • Exploration of outer space turned out way less interesting than I imagined
  • Exploration of cyberspace turned out way more interesting than I imagined
  • For an idea to be worthwhile, there needs to be some proportionality between its usefulness and its difficulty of realization (e.g. some god-like powers are easier to achieve than flying cars)
  • The term "nanotechnology" indicates how primitive the field really is; we don't call our every other technology "centitechnology"

  • Human-level intelligence is the lower bound for a technological species
  • Modern humans are surprisingly altruistic given our population size; ours is the age of disequilibrium
  • Technological progress never repeats itself, so neither does history
  • Every social progress is just technological progress in disguise
  • The effect of the bloodiest conflicts in history on world population is... none whatsoever

  • Schools teach too much, not too little
  • The education system is actually a selection system
  • Innovation, like oil, is a very limited resource and can't be scaled arbitrarily
  • The deafening silence around death by aging
comment by mako yass (MakoYass) · 2019-10-21T01:03:08.816Z · LW(p) · GW(p)

Very few of these are controversial here. The only ones that seem controversial to me are

  • Schools teach too much, not too little

...

That's all, actually. And I'm not even incredulous about that one, just a bit curious.

Although aging and death is terrible, I don't think there's much point in building a movement to stop it. AGI will almost certainly be solved before even half of the processes of aging are.

Replies from: None, TAG
comment by [deleted] · 2019-10-21T03:35:19.230Z · LW(p) · GW(p)

Everyone has his pet subject which he thinks everybody in society ought to know and thus ought to be added to the school curriculum. Here on LessWrong, it tends to be rationality, Bayesian statistics and economics, elsewhere it might be coding, maths, the scientific method, classic literature, history, foreign languages, philosophy, you name it.

And you can always imagine a scenario where one of these things could come in handy. But in terms of what's universally useful, I can hardly think of anything beyond reading/writing and elementary school maths; that's it. It makes no economic sense to drill so much knowledge into people's heads; division of labor is like the whole point of civilization.

It's also morally wrong to put people through needless suffering. School is a waste, or rather theft, of youthful time. I wish I had played more video games and hung out with friends more. I wish I scored lower on all the exams. If your country's children speak 4 languages and rank top 5 in PISA tests, that's nothing to boast about. I waited for the day when all the misery would make sense; that day never came. The same is happening to your kids.

Education is like code - the less the better; strip down to the bare essentials and discard the rest.

Edit: Sorry for the emotion-laden language, the comment turned into a rant half-way through. Just something that has affected me personally.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2019-10-26T23:03:01.451Z · LW(p) · GW(p)

You make a very strong point that I think I can wholly agree with, but I think there is more here we have to examine.

It's sometimes said that the purpose of public education is to create the public good of an informed populace (sometimes, "fascism-resistant voters". A more realpolitik way of putting it is "a population who will perpetuate the state"; this is good exactly when the state is good). So they teach us literature and history and hope that this will create a cultural medium whose constituents can communicate well and never repeat their civilization's past mistakes. If it works, the benefits to the commons are immeasurable.

There isn't an obvious upper bound of curriculum size where enriching this commons would necessarily stop being profitable. The returns on sophistication of a well designed interchange system are greater than linear on the specification size of the system.

It might not be well designed. I don't remember seeing anything about economics or law (or even, hell, driving) in the public curriculum, and I think that might be the real problem here. It's not that they teach too much, it's that they don't understand what kind of things a creator of the public good of a good public is supposed to be teaching.

Replies from: None
comment by [deleted] · 2019-10-28T18:53:39.787Z · LW(p) · GW(p)

I disagree on multiple dimensions:

First, let's get disagreements about values out of the way: I hate the term "brainwashing" since it's virtually indistinguishable from "teaching", the only difference being the intent of the speaker (we're teaching our kids liberal democratic values while the other tribe is brainwashing their kids with Marxism). But to the extent "brainwashing" has a useful definition at all, creating "a population who will perpetuate the state" would be it. In my view, if our civilization can't survive without tormenting children with years upon years of conditioning, it probably shouldn't.

Second, I'm very skeptical about this model of a self-perpetuating society. So "they" teach us literature and history? Who's "they"? Group selectionism doesn't work [LW · GW]; there is no reason to assume that memes good at perpetuating themselves would also be good at perpetuating the civilization they find themselves in. I think it's likely that people in charge of framing the school curriculum are biased towards holding in high regard those subjects that they were taught in school themselves (sunk cost fallacy, prestige signaling), thus becoming vehicles for meme spread. I don't see any incentive for any education board member to stop, think, and analyze what will perpetuate the government they're a part of.

I also very much doubt the efficacy of such education/brainwashing at manipulating citizens into perpetuating the state. In my experience, reverse psychology and tribalism are much better methods for this purpose than straightforward indoctrination, particularly with people in their rebellious youth. The classroom, frequently associated with boredom and monotony, is among the worst environments to apply these methods. There is no faster way to create an atheist out of a child than sending him through mandatory Bible study classes; and no faster way to create a libertarian than to make him memorize Das Kapital.

Lastly, the bulk of today's actual school curriculum is neutral with respect to perpetuating our society - maths, physics, chemistry, biology, foreign languages, even most classical literature are apolitical. So even setting the issue of "civilizational propagation" aside, there is still enormous potential for optimization.

comment by TAG · 2019-10-26T13:33:33.153Z · LW(p) · GW(p)

Schools teach too much, not too little

It's hard not to, when you don't know what people are going to end up doing. If you know that the son of the blacksmith is going to be a blacksmith, the problem gets much simpler.

Replies from: None
comment by [deleted] · 2019-10-28T17:44:22.394Z · LW(p) · GW(p)

It's easy to prepare kids to become anything. Just teach what's universally useful.

It's impossible to prepare kids to become everything. Polymaths stopped being viable two centuries ago.

There is a huge difference between union and intersection of sets.

Replies from: TAG
comment by TAG · 2019-10-29T13:43:21.231Z · LW(p) · GW(p)

Just teach what’s universally useful.

Why? It's not obvious that that is better than teaching a bit of everything. For instance, if 10% of jobs need a little bit of geography, then having only candidates who know nothing about geography is going to be a disadvantage to those employers.

Replies from: None
comment by [deleted] · 2019-10-29T15:12:44.933Z · LW(p) · GW(p)

And thus, knowing geography becomes a comparative advantage to those who choose to study it. Why should the rest of us care?

Replies from: TAG
comment by TAG · 2019-10-29T15:54:48.145Z · LW(p) · GW(p)

Because people not knowing geography could be a disadvantage to employERs as well as employees. A minimal education system could be below the economic optimum.

Replies from: None
comment by [deleted] · 2019-10-29T23:34:42.515Z · LW(p) · GW(p)

This is like saying we need the government to mandate apple production, because without apples we might become malnourished which is bad. Why can't the market solve the problem more efficiently? Where's the coordination failure?

Replies from: TAG
comment by TAG · 2019-10-30T10:25:29.492Z · LW(p) · GW(p)

The market can't solve (high school) education because education is mostly public.

answer by mako yass (MakoYass) · 2019-10-21T00:50:48.010Z · LW(p) · GW(p)

My past big ideas mostly resemble yours, so I'll focus on those of my present:

Most economic hardship results from avoidable wars, situations where players must burn resources to signal their strength of desire or power (will). I define Negotiations as processes that reach similar, or better outcomes as their corresponding war. If a viable negotiation process is devised, its parties will generally agree to try to replace the war with it.

Markets for urban land are currently, as far as I can tell, the most harmful avoidable war in existence. Movements in land price fund little useful work[1] and continuously, increasingly diminish the quality of our cities (and so diminish the lives of those who live in cities, which is a lot of people), but they are currently necessary for allocating scarce, central land to high-value uses. So, I've been working pretty hard to find an alternate negotiation process for allocating urban land. It's going okay so far. (But I can't carry this out alone. Please contact me if you have skills in numerical modelling, behavioural economics, machine learning and philosophy (well mixed), or any experience in industries related to urban planning.)

Bidding wars are a fairly large subclass of avoidable wars. The corresponding negotiation, for an auction, would be for the players to try to measure their wills out of band, then for those found to have the least will to commit to abstaining from the auction. (People would stop running auctions if bidders could coordinate well enough to do this, of course, but I'm not sure how bad a world without auctions would be; I think auctions benefit sellers more than they benefit markets as a whole, most of the time. A market that serves both buyer and seller should generally consider switching to Vickrey auctions, at the least.)
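
For reference, a minimal sketch of the Vickrey (sealed-bid, second-price) rule in Python, with made-up bidders: the highest bidder wins but pays only the second-highest bid, which is what makes bidding one's true value a dominant strategy.

```python
def vickrey_outcome(bids):
    """Sealed-bid second-price auction: the highest bid wins, and the winner
    pays the runner-up's bid (or 0 if there is only one bidder)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

print(vickrey_outcome({"alice": 120, "bob": 90, "carol": 150}))  # ('carol', 120)
```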

[1] Regarding intensification: my impression so far is that there is nothing especially natural about land price increase as a promoter of density. It doesn't do the job as fast as we would like it to. The benefits of density go to the commons. Those common benefits of density correlate with the price of the individual dense building, but don't seem to be measured accurately by it.


Another Big Idea is "Average Utilitarianism is more true than Sum Utilitarianism", but I'm not sure whether the world is ready to talk about that. I don't think I've digested it fully yet. I'm not sure that rock needs to be turned over...

I also have a big idea about the evolutionary telos of paraphilias, but it's very hard to talk about.


Oh, this might be important: I studied logic for four years so that I could tell you that there are no fundamental truths, and all math and logic just consists of a machine that we evolved and maintained just because it happened to work. There's no transcendent beauty at the bottom of it all, it's all generally kind of ugly even after we've cut the ugliest parts away, and there may be better alternatives (consider CDT and FDT for an example of a deposition of seemingly fundamental elegance)

comment by cousin_it · 2019-10-21T13:59:43.143Z · LW(p) · GW(p)

The usual Georgist story is that the problem of allocating land can be solved by taxing away all unimproved value of land (or equivalently by the government owning all land and renting it out to the highest bidder), and that won't distort the economy, but the people who profit from current land allocation are disproportionately powerful and will block this proposal. Is that related to the problem you're trying to solve?

Replies from: MakoYass
comment by mako yass (MakoYass) · 2019-10-22T02:22:42.420Z · LW(p) · GW(p)

Yeah. "Replace the default beneficiaries of avoidable wars with good people who use the money for good things" is a useful civic method to bear in mind but probably far from ideal. Taxation is fine, you need to do it to fund the commons, but avoidable wars seems like a weird place to draw taxes from, which nobody would consciously design? Taxes that would slow down urbanisation (by making the state complicit in increases in urban land price/costs of urban services) sound like a real bad idea.

My proposed method is, roughly, using a sort of reciprocal, egalitarian utilitarianism to figure out a good way to arrange everyone who owns a share in the city (shares will cost about what it costs to construct an apartment. Maybe different entry prices for different apartment classes.. although the cost of larger apartment tickets will have to take into account the commons costs that lower housing density imposes on the labour market), and to grant leases to their desired businesses/services. There shall be many difficulties along the way but I have not hit a wall yet.

Replies from: cousin_it
comment by cousin_it · 2019-10-22T06:43:14.215Z · LW(p) · GW(p)

Taxes that would slow down urbanisation (by making the state complicit in increases in urban land price/costs of urban services) sound like a real bad idea.

AFAIK the claim is that taxing land value would lead to lower rents overall, not higher. There's some econ reasoning behind that.

comment by romeostevensit · 2019-10-27T02:13:54.584Z · LW(p) · GW(p)

I don't think this is addressable because of the taboo tradeoffs in current culture around money and class. Some people produce more negative externalities than others in ways our legal system cannot address, therefore people sequester themselves via money gating, since that is still acceptable in practice even though it is decried explicitly.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2019-10-27T04:16:58.473Z · LW(p) · GW(p)

What negative externalities are you thinking of? Maybe it's silly for me to ask you to say, if you're saying they're taboo, but I'm looking over all of the elitist taboos and I don't think any of them really raise much of an issue.

Did I mention that my prototype aggregate utility function only regards adjacency desires that are reciprocated? For instance, if a large but obnoxious fan-base all wanted to be next to a single celebrity author who mostly holds them all in contempt, the system basically ignores those connections. Mathematically, it's like, the payoff of positioning a and b close together is min(a.desireToBeNear(b), b.desireToBeNear(a)). The default value for desireToBeNear is zero.
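
A minimal sketch of that reciprocity rule in Python, with hypothetical agents: the payoff is the minimum of the two directed desires (zero by default), so one-sided desire contributes nothing.

```python
def adjacency_payoff(a, b, desire_to_be_near):
    """Payoff for placing a and b near each other: the lesser of the two
    directed desires, so unreciprocated desire is ignored entirely."""
    return min(desire_to_be_near.get((a, b), 0),
               desire_to_be_near.get((b, a), 0))

desires = {("fan", "author"): 9}                   # the author doesn't reciprocate
print(adjacency_payoff("fan", "author", desires))  # 0
desires[("author", "fan")] = 2
print(adjacency_payoff("fan", "author", desires))  # 2
```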

P.S. Does the fact that each user desire expression (roughly, the individual utility function) gets evaluated in a complex way that depends on how it relates to the other desire expressions make this not utilitarianism? Does this position that fitting our desires together will be more complex than mere addition have a name?

Replies from: romeostevensit
answer by Isnasene · 2019-10-19T04:32:37.078Z · LW(p) · GW(p)

One thing I'm thinking about these days:

Oftentimes, when people make decisions, they don't explicitly model how they themselves will respond to the outcomes; they instead use simplified models of themselves to quickly make guesses about the things that they like. These guesses can often act as placebos which turn the expected benefits of a given decision into actual benefits solely by virtue of the expectation [LW · GW]. In short, if you have the psychological architecture that makes it physically feasible to experience a benefit, you can hack your simplified models of yourself to make yourself get that benefit.

This isn't quite a dark art of rationality [LW · GW] since it does not need to actually hurt your epistemology but it does leverage the possibility of changing who you are (or more explicitly, changing who you are by changing who you think you are). I'm currently using this as a way to make myself into the kind of person who is a writer.


answer by ryan_b · 2019-10-28T19:04:22.706Z · LW(p) · GW(p)

Humans prefer mutual information. Further, I suspect that this is the same mechanism that drives our desire to reproduce.

The core of my intuition is that we instinctively want to propagate our genetic information, and also seem to want to propagate our cultural information (e.g. the notion of not being able to raise my daughter fills me with horror). If this is true of both kinds of information, it probably shares a cause.

This seems to have explanatory power for a lot of things.

  • Why do people continue to talk when they have nothing to say, or spend time listening to things that make them angry or afraid? Because there are intrinsic rewards for speaking and for listening, regardless of content. These things lead to shared information the same way sex leads to children.
  • Why do people make poetry and music? Because this is a bundle of their cultural information propagating in the world. I think the metaphor about the artwork being the artist's child should be taken completely literally.
  • Why do people teach? A pretty good description of teaching is mutualizing information.

This quickly condensed into considering how important shared experiences are, and therefore also coordinated groups. This is because actions generate shared experiences, which contain a lot of mutual information. Areas of investigation for this include military training, asabiyah, and ritual.

What I haven't done yet is really link this to what is happening in the brain; naively it seems consistent at first blush with the predictive processing model, and also seems like maybe-possibly Fristonian free energy applied to other humans.

answer by alkay · 2019-10-26T00:16:44.776Z · LW(p) · GW(p)

We experience and learn so many things over the years. However, our memories may fail us: they fail to recall a relevant fact that could have been very useful for accomplishing an immediate task at hand. E.g., my car tire has punctured on a busy street, but I cannot recall how to change it -- though I remember reading about it in the manual.

It is likely that the memory is still alive somewhere in a deep corner of my brain. In that case, I may be able to think hard and push myself to remember it. Such a process is bound to be slow, and people on the street would yell at me for blocking it!

Sometimes our memories fail us "silently". We don't know that somewhere in our brain is information we can bring to bear on accomplishing a task at hand. What if I don't even know that I have read a manual on changing car tires?!

Long term memory accessibility is thus an issue.

Now our short-term memory is also very, very limited (4-7 chunks at a time). In fact, the short cache of working memory might be a barrier to [LW · GW] intellectual progress. It is then crucial to inject relevant information into this limited working-memory space if we are to give a task our best, most intelligent shot.

Thus, I think about memory systems that can artificially augment the brain. I think of them from the point of view of storing more information and indexing it better. I think of them for faster and more relevant retrieval.

I think of them as importable and exportable -- I can share them with my friends (and learn how to change tires instantaneously). A Pensieve-like memory bank.

I thus think of "digital memories" that augment our relatively superior and creative compute brain processes. That is my (current) big idea.
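
A toy sketch of the storage-and-retrieval core such a system might have (the names and structure here are purely hypothetical): record tagged snippets as they happen, then recall the ones most relevant to the task at hand.

```python
from datetime import datetime

class MemoryStore:
    """Minimal external 'digital memory': store tagged snippets, retrieve by relevance."""

    def __init__(self):
        self.entries = []

    def record(self, text, tags):
        self.entries.append({"when": datetime.now(), "text": text, "tags": set(tags)})

    def recall(self, query_tags, limit=3):
        query = set(query_tags)
        hits = [e for e in self.entries if e["tags"] & query]
        hits.sort(key=lambda e: len(e["tags"] & query), reverse=True)
        return hits[:limit]

store = MemoryStore()
store.record("Loosen the lug nuts before jacking up the car", {"car", "tire", "repair"})
store.record("Dentist appointment moved to Friday", {"calendar", "dentist"})
print(store.recall({"tire", "car"})[0]["text"])
```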

comment by [deleted] · 2019-10-28T17:35:01.165Z · LW(p) · GW(p)

This is basically the long-term goal of Neuralink as stated by Elon Musk. I am however very skeptical because of two reasons:

  • Natural selection did not design brains to be end-user modifiable. Even if you could accurately monitor every single neuron in a brain in real-time, how would you interpret your observations and interface with it? You'd have to build a translator by correlating these neuron firing patterns with observed behaviors, which seems extremely intractable
  • In what way would such a brain-augmenting external memory be superior to pen and paper? Pen and paper already allows me to accomplish working-memory limited tasks such as multiplication of large numbers, and I'm neither constrained by storage space (I will run out of patience before I run out of paper) nor by bandwidth of the interface (most time is spent on computing what to write down, not writing itself)

It seems there is an extreme disproportionality between the difficulty of the problem and the value of solving it.

Replies from: alkay
comment by alkay · 2019-10-30T00:33:22.950Z · LW(p) · GW(p)

I agree with you; I too am skeptical about Neuralink being useful anytime soon.

The augmentation in my vision, at the beginning at least, is external. I don't attempt to modify the brain. I externally "record" a person's life. A simple manifestation of such an augment would be a wearable device like Google Glass.

It follows you around and forms "memories". This external augmentation is then able to store, index, and retrieve relevant memories at scale and with speed, and so aid the brain's normal abilities.

Hopefully it's easy to see that such an external augmentation is better than a pen-and-paper memory system.

answer by Nicholas Garcia · 2019-10-31T03:24:44.490Z · LW(p) · GW(p)

Nearly all education should be funded by income sharing agreements.

E1 = student's expected income without the credential / training (for the next n years).

E2 = student's expected income with the credential / training (over the next n years). Machine learning can estimate this separately for each student.

C = cost of the program

R = Percent of income above E1 that the student must pay back = C / (E2 - E1)

Give students a list of majors / courses / coaches / apprenticeships, etc. with an estimate of expected income E2 and rate of repayment R.
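
A minimal sketch of the arithmetic in Python, assuming E1 and E2 are expected total incomes over the payback window: R is chosen so that the expected extra income covers the program's cost.

```python
def repayment_rate(e1, e2, cost):
    """Fraction of income above E1 to pay back, set so the expected extra
    income over the window covers the program's cost (simplified sketch)."""
    extra = e2 - e1
    if extra <= 0:
        return None  # program adds no expected value; no ISA offered
    return cost / extra

# A $20k program expected to lift total income over the window from $200k to $240k:
rate = repayment_rate(200_000, 240_000, 20_000)
print(f"{rate:.0%} of income above E1")  # 50% of income above E1
```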

Benefits:

  • This will seamlessly sort students into programs that actually benefit them.
  • Programs that lie or misestimate their own value will be bankrupted (instead of saddling the student with debt). Schools must maximize effectiveness, not merely enrollment (the current model).
  • There would be zero financial barriers to entry for poorer students, which is equivalent to Bernie's "free college", except you get nudged toward training that is actually useful instead of easy or entertaining. Also, this could be achieved without raising taxes one iota.
  • If "n years" is long, then schools will optimize for lifetime earnings, not just "get a job now". This could incentivize schools to invest in lifelong learning, networking, etc.

Obviously, rich students could still pay out of pocket up front (since they are nearly guaranteed a high income, they might not want to give a percent away).

comment by Matthew Barnett (matthew-barnett) · 2019-10-31T03:41:02.667Z · LW(p) · GW(p)

I like this idea, but I'm still pretty negative about the entire idea of college as a job-training experience, and I'm worried that this proposal doesn't really address what I see as a key concern with that framework.

I agree with Bryan Caplan that the reason why people go to college is mainly to signal their abilities. However, it's an expensive signal -- one that could be better served by just getting a job and using the job to signal to future employers instead. Plus, then there would be fewer costs on the individual if they did that, and less 'exploitation' via lock-in (which is what this proposed system kind of does).

The reason why people don't just get a job out of high school is still unclear to me, but this is perhaps the best explanation: employers are just very skeptical of anyone without a college degree being high quality, which makes getting one nearly a requirement even for entry level roles. Hypothetically however, if you can pass the initial barrier and get 4 years worth of working experience right out of high school, that would be better than going to college (from a job-training and perhaps even signaling perspective).

Unfortunately, this proposal, while much better than the status quo, would perpetuate the notion that colleges are job-training grounds. In my opinion, this notion wastes nearly everyone's time. I think the right thing to do might be to combine this proposal with Caplan's by just eliminating subsidies to college.

Replies from: nicholas-garcia
comment by Nicholas Garcia (nicholas-garcia) · 2019-10-31T21:19:16.764Z · LW(p) · GW(p)

Can you explain what you mean by the problem of job training?

You mean job vs. career vs. calling?

If by "job training" you mean maximizing short-run over long-run earnings, I agree with you. But for that reason, if you move the "slider" toward a longer payoff period, then the schools will be incentivized to teach more fundamental skills, not short-term "job training".

On the other hand, sometimes people just need to get their foot in the door to get up and running. As they accumulate savings, on the job experience, professional networks, etc. even a good "first job" can give a lifetime boost.

A lot of people I grew up with have the "cold start" or "failure to launch" problem, where they never get into a good-enough paying job and just spin their wheels as the years go by, never gaining traction. For them even getting a foot in the door will get the ball rolling.

comment by cousin_it · 2019-10-31T09:53:41.672Z · LW(p) · GW(p)
answer by Stephen James · 2019-10-27T03:40:28.349Z · LW(p) · GW(p)

I tend to keep three on mind and in rotation, as they move from "under inspection" to "done for now" and all the gradations between. In the past, this has included the likes of:

  • the validity of reverse chronological time travel ("done for now" back in 2010)
  • predictability of interpersonal interactions ("done for now" as of Spring 2017)
  • how to reject advice, while not alienating the caring individuals that provide advice (on hold)

Currently I'm working on:

  • How and Why are people presenting themselves as so divided in current conversations?
    • Yes, Politics is the Mind Killer. Still, there are people that I think I want in my life that are all falling prey to this beast, and I want to save them.
    • Maybe there's a Sequence to talk me out of it?
  • The Mathematical Legitimacy of Machine Learning (convex optimization of randomly initialized matrices whose products fit curves in n-dimensional space)
    • Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.
    • While not a mathematician myself, I have spoken with a few mathematicians who've validated my opinions (after examining the literature), and am currently seeking training to become such.
  • How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

The last of those has been in the works for the longest, and current evidence (anecdotal and journal studies) suggests to me that those of us researching "apathy for self-betterment" are looking too high up the abstraction ladder. So it's time to dig a little deeper.

comment by [deleted] · 2019-10-28T16:59:29.953Z · LW(p) · GW(p)
Still, there are people that I think I want in my life that are all falling prey to this beast, and I want to save them.

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into people you'd like them to be and not what they themselves like to be.

How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

Ethics aside, this seems to be a tall order. You're basically trying to hack into someone else's mind through very limited input channels (speech/text). In my experience it's never a lack of knowledge that's hindering people from overcoming akrasia (also the reason I'm skeptical towards the efficacy of self-help books).

Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.

That's a very good point. In ML courses lots of time is spent on introducing different network types and technical details of calculus/linear algebra, without explaining why to pick out neural networks from idea space in the first place beyond hand-waving that it's "biologically inspired".

Replies from: stevefox
comment by Stephen James (stevefox) · 2019-11-04T03:00:41.801Z · LW(p) · GW(p)

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into people you'd like them to be and not what they themselves like to be.

Perhaps I didn't give enough detail. I definitely don't want to drive others exclusively into what I would like them to be. Nor do I want people to believe as I do in most regards. There's a greater principle that I think would make the world a better place:

When I engage with someone who presents themselves as opposed to an entire Other group, they tend to (in one way or another) divulge their assumption for opposing/hating/rebuking/etc that group. Very rarely do they have a complex enemy. The ethical ground I stand on is one of seeking to build bridges of understanding to those whom one claims to oppose, that will be readily crossed. My hope is that, with time, the "I'm anti-XYZ" or "I'm pro-ABC" won't be necessary because we'll be willing to consider people as fellow humans. We won't seek to make them a low-resolution representation of one sliver of their identity. We will, hopefully, face our opposition with eyes wide open, Bayesian "self-updaters" at the ready.

You're basically trying to hack into someone else's mind through very limited input channels (speech/text).

Again, I may have put incorrect emphasis or perhaps you are perceptive of the ways ideas can turn dangerous. Either way, I thank you for helping me relate these ideas.

I want to teach what I uncover because I think there is a limited impact to whatever sweet truths I glean from the universe if they stay strictly inside my head. Part of this goal is acquiring new teaching abilities, such as the ability to custom-fit my conveyance of material to the audience and dynamically ("real-time") adjust delivery based on reception.

In my experience it's never a lack of knowledge that's hindering people from overcoming akrasia (also the reason I'm skeptical towards the efficacy of self-help books).

This is exactly the point of that idea: just having the information doesn't seem to be enough. But for me, the knowledge seems more than enough for many applications. I want to

  1. extract whatever that is
  2. figure out how to apply it in the domains where - for myself - "cold-turkey" doesn't seem to do it,
  3. distill it, and
  4. share what's distilled.

Enabling the sincere dropping of bad habits strikes me as "for the good".

For example, it would be great if I could switch-off the processes that allow me to easily generate resentment for my spouse. It would be even better if I could flip the switch like I dropped hot showers, or the belief that the runtime complexity of the "power" function was constant-time (rather than the correct logarithmic-time).

There are possible ways of using this ability for ill. There would need to be controlled experiments if the tool is even extricable. There get to be a lot of conjunctions, so it's of a lesser concern for the near-term.

1 comment

Comments sorted by top scores.

comment by GeneSmith · 2022-03-13T22:05:21.973Z · LW(p) · GW(p)

I just stumbled across this post 3 years later and... wow. A lot of these comments seem like a treasure trove of stuff to read.