Comments
Good article. I would advise less emphasis on traditional schooling (reading, writing, 'rithmetic) and more emphasis on relationship intelligence and embodied intelligence (making things with your hands).
To me the most important graph is the one that shows both mothers and fathers started spending much more time on child-care in the 90s. What the heck happened? Did children suddenly become that much more difficult to manage? If kids really consume that much time and effort, it's no wonder that people don't want to have kids - it's too much damn work!
The Japanese value stability much, much more than Americans. This harms their economy in various ways:
- Firms that are losing money are kept alive
- Employees that underperform are not fired
- Seniority is prized much more than talent (if someone is promoted higher due to talent, other employees will be unhappy)
- Customers continue to patronize their known vendors due to loyalty, even if those vendors aren't very good
- The idea of "disruption" - the holy grail of Silicon Valley - is anathema to the Japanese
How much did the supposedly severe decline in Google's organizational health contribute to your decision to change jobs?
Defined benefit pension schemes like Social Security are grotesquely racist and sexist, because of life expectancy differences between demographic groups.
African American males have a life expectancy of about 73 years, while Asian American females can expect to live 89 years. The percentage difference between those numbers may not seem that large, but it means that the latter group gets 24 years of pension payouts (assuming a retirement age of 65), while the former gets only 8, a 3x difference. So if you look at a black man and an Asian woman who have the exact same career trajectory, SS pay-ins, and retirement date, the latter will receive a 3x greater benefit than the former.
Another way of seeing this fact is to imagine what would happen if SSA kept separate accounting buckets for each group. Since the life expectancy of black men is much lower, they would receive a significant benefit from this separation - either lower contributions or higher payouts.
Defined-benefit schemes add insult to injury. The injury is that some groups have shorter lives. The insult is that the government forces them to subsidize the retirement of longer-lived groups.
In general, anytime you see a hardcoded age-of-retirement number in the tax system or entitlement system, the underlying ethics is questionable. Medicare kicks in at 65, which means that some groups get a much greater duration of government-supported healthcare.
Judging by the hammering that Meta's stock has taken over the last 5 years, the market really disagrees with you.
Here's an argument against radical VR transformation in the near term: some significant proportion of people have a strong aversion to VR. But the benefit of VR for meetings has strong network effects: if you have 6 friends you want to meet with, and 2 of the 6 hate VR, that's going to derail the benefit of VR for the whole group.
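A back-of-the-envelope sketch of the network effect (the 30% aversion rate is my own illustrative assumption, and I'm assuming independence):

```python
def prob_group_blocked(n: int, p: float) -> float:
    """Probability that at least one of n friends refuses VR,
    assuming each refuses independently with probability p."""
    return 1 - (1 - p) ** n

# With a hypothetical 30% aversion rate, most 6-person groups
# contain at least one refuser, so the whole group stays off VR.
print(f"{prob_group_blocked(6, 0.30):.0%}")  # ~88%
```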
The situation is not ‘handled.’ Elites have lost all credibility.
I think it's worth caveating that not all elites have lost credibility. Elites in places like Singapore, Switzerland, and Finland have a lot of credibility.
Two possibilities:
- The company figured out that men consume less health care than women, and adjusted their salaries to account for it
- The company has a defined-benefit pension plan, and realized that this costs more to provide to women because women live longer. So they adjusted salaries to account for the difference in costs
I don't buy the housing cost / homelessness causation. There are many poor cities in the US that have both low housing costs and high homelessness. This page mentions Turlock, CA, Stockton, CA, and Springfield, MA as among the top 15 places with the highest homelessness rates; a quick Zillow search indicates they all have a fair bit of cheap housing.
The relationship between homelessness and state-wide housing costs is probably caused by a latent variable: degree of urbanization. Cities are both more expensive and have more homelessness, and states vary widely along the urban/rural dimension.
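A toy simulation of the latent-variable story (all numbers invented purely for illustration): urbanization independently drives both housing costs and homelessness, producing a strong correlation between them with no direct causal link.

```python
import numpy as np

rng = np.random.default_rng(0)
urbanization = rng.uniform(0, 1, size=500)   # the latent variable

# Neither outcome depends on the other, only on urbanization.
housing_cost = 1000 + 2000 * urbanization + rng.normal(0, 200, 500)
homeless_rate = 5 + 30 * urbanization + rng.normal(0, 4, 500)

print(np.corrcoef(housing_cost, homeless_rate)[0, 1])  # strongly positive (~0.85)
```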
You also missed a strong countervailing factor which would tend to reduce SF's homelessness: demographics. SF has fewer blacks than the nation as a whole, and blacks are more likely to be homeless. SF is also disproportionately Asian, and Asians are much less likely to be homeless.
I think SF's homelessness problem is caused by a very simple reason: SF is a relatively pleasant place to be a street person. This is partially because of the weather, as you mentioned, but also because the city is quite tolerant of the homeless population and has a lot of services for them.
Copied from a previous comment on Hacker News
I wish you well and I hope you win (ed, here I mean I hope the proposal is approved)
I am pessimistic though. I don't think people really understand how much current homeowners do not want additional housing to be built. It makes sense if you consider that the net worth of a typical homeowner is very substantially made up of a highly leveraged long position in real estate. If that position goes south - because of an increase in housing supply, or because of undesirable new people moving into the neighborhood - the homeowner's net worth could be decimated.
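A rough illustration of the leverage at work (the numbers are hypothetical):

```python
home_value = 500_000
mortgage = 400_000
equity = home_value - mortgage                           # 100,000 of net worth

price_drop = 0.10                                        # a 10% decline in home prices
new_equity = home_value * (1 - price_drop) - mortgage    # 50,000

print(f"equity loss: {1 - new_equity / equity:.0%}")     # 50%, i.e. 5x leverage
```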
Now, most people will not come out and say directly that they oppose new housing for the obvious economic reason, because they don't want to seem selfish and greedy and maybe racist. So they have to find a socially acceptable cover story for opposing new housing: environmentalism, concerns about safety, and so on.
End Social Security and Other Defined-Benefit Pension Schemes
They are intrinsically racist and sexist.
Consider two people, Alice and Bob. Alice is an Asian-American female, while Bob is an African-American male. From the point of view of Social Security, they are identical in every respect: they are the same age, they make contributions of the same amount on the same dates, and they retire at the same time. For the sake of argument, suppose they begin taking SS payments at age 70.
Given that Alice and Bob have made exactly equivalent contributions to the system, you would expect their payouts to be roughly comparable. This is not even close to being the case, because of differences in life expectancy between demographic groups. Asian-American life expectancy is 86.3 years, while for African-Americans it is 75.0 (source). Furthermore, women enjoy about a 4-year life expectancy advantage over men. So Alice can expect to live to about 88 years, while Bob can only expect to live to about 73. That means Alice receives a 6x greater benefit from SS than Bob - 18 years of payments vs 3 years, in expectation - even though they contributed the exact same amounts.
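The arithmetic, as a minimal sketch (treating life expectancy as a point value and ignoring discounting and survival curves):

```python
def payout_years(life_expectancy: float, start_age: float = 70) -> float:
    """Expected years of benefit, under the simplifying assumption that
    everyone lives exactly to their group's life expectancy."""
    return max(life_expectancy - start_age, 0)

alice = payout_years(88)   # Asian-American female -> 18 years
bob = payout_years(73)     # African-American male -> 3 years
print(f"Alice receives {alice / bob:.0f}x the payout years of Bob")  # 6x
```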
Having a budget where initial creation is essentially free (fun!) while maintenance is extremely expensive (drudgery!) is a dramatic exaggeration for most software development.
My feeling is that most software development has exactly the same cost parameters; the difference is just that BigTech companies have so much money that they can pay thousands of engineers handsome salaries to do the endless drudgery required to keep the tech stacks working.
The SQLite devs pledge to support the product until 2050.
Thanks for the positive feedback and interesting scenario. I'd never heard of Birobidzhan.
Thanks for the tip about Kusto - it actually does look quite nice.
My prediction is that the main impact is to make it easier for people to throw together quick MVPs and prototypes. It might also make it easier for people to jump into new languages or frameworks.
I predict it won't impact mainstream corporate programming much. The dirty secret of most tech companies is that programmers don't actually spend that much time programming. If I only spend 5 hours per week writing code, cutting that time down to 4 hours while potentially reducing code quality isn't a trade anyone will really want to make.
Why isn't this an argument for banning all politically powerful people from Twitter?
One important observation related to this issue: we do observe specific cognitive deficits (e.g. people who can't use nouns), but those deficits are almost always related to brain trauma (stroke, etc.). If significant cognitive logic were coded into the genome, we should see specific cognitive deficits in otherwise healthy young people, caused by mutations.
I'm not sure exactly what you mean, but I'll guess you mean "how do you deal with the problem that there are an infinite number of tests for randomness that you could apply?"
I don't have a principled answer. My practical answer is just to use good intuition and/or taste to define a nice suite of tests, and then let the algorithm find the ones that show the biggest randomness deficiencies. There's probably a better way to do this with differentiable programming - I finished my PhD in 2010, before the deep learning revolution.
In my PhD thesis I explored an extension of the compression/modeling equivalence that's motivated by Algorithmic Information Theory. AIT says that if you have a "perfect" model of a data set, then the bitstream created by encoding the data using the model will be completely random: every statistical test for randomness applied to the bitstream will return the expected value. For example, the proportion of 1s should be 0.5, the proportion of 1s following the prefix 010 should be 0.5, and so on. Conversely, if you find a "randomness deficiency", you have found a shortcoming of your model. And it turns out you can use this info to create an improved model.
That gives us an alternative conceptual approach to modeling/optimization. Instead of maximizing a log-likelihood, take an initial model, encode the dataset, and then search the resulting bitstream for randomness deficiencies. This is very powerful because there is an infinite number of randomness tests that you can apply. Once you find a randomness deficiency, you can use it to create an improved model, and repeat the process until the bitstream appears completely random.
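For concreteness, here's a minimal sketch of one such test (the prefix test mentioned above), run on a synthetic bitstream standing in for a real encoded stream:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)   # stand-in for an encoded bitstream

# Among bits preceded by the prefix 010, the proportion of 1s should be
# ~0.5. A large deviation is a randomness deficiency, i.e. evidence of a
# shortcoming in the model that produced the stream.
follows_prefix = (bits[:-3] == 0) & (bits[1:-2] == 1) & (bits[2:-1] == 0)
print(bits[3:][follows_prefix].mean())    # ~0.5 for a truly random stream
```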
The key trick that made the idea practical is that you can use "pits" instead of bits. Bits are tricky, because as your model gets better, the number of bits goes down - that's the whole point - so the relationship between bits and the original data samples gets murky. A "pit" is a [0,1) value calculated by applying the Probability Integral Transform to the data samples using the model. The same randomness requirements hold for the pitstream as for the bitstream, and there are always as many pits as data samples. So now you can define randomness tests based on intuitive context functions, like "how many pits are in the [0.2,0.4] interval when the previous word in the original text was a noun?"
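A minimal sketch of the pit idea (my own toy example, using a deliberately misspecified Gaussian model so that a deficiency actually shows up):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100_000)   # true distribution

# Probability Integral Transform: push each sample through the model's CDF.
# Under a perfect model the pits would be i.i.d. Uniform[0, 1).
pits = norm.cdf(data, loc=0.0, scale=1.5)   # model with the wrong scale

# Simple randomness test: the fraction of pits in [0.2, 0.4) should be 0.2.
observed = np.mean((pits >= 0.2) & (pits < 0.4))
print(f"observed {observed:.3f}, expected 0.200")   # ~0.25: a deficiency,
# which points to a concrete way the model can be improved
```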
Cool concepts! What tech stack did you use? Was it painful to get the Facebook API working?
Not a stupid question; this issue is actually addressed in the essay, in the section about interior modeling vs unsupervised learning. The latter is very vague and general, while the former is much more specific and also intrinsically difficult. The difficulty and precision of the objective make it much better as a goal for a research community.
I started this essay last year and procrastinated on completing it for a long time, until the recent GPT-3 announcement gave me the motivation to finish it up.
If you are familiar with my book, you will notice some of the same ideas, expressed with different emphasis. I congratulate myself a bit on predicting some of the key aspects of the GPT-3 breakthrough (data annotation doesn't scale; instead learn highly complex interior models from raw data).
I would appreciate constructive feedback and signal-boosting.
I would add two ideas:
- Try to find a good role model - someone who is similar to you in relevant respects, is a couple of years ahead of you, has done something you think is awesome, and whom you can talk to and observe to some extent. Bill Gates is probably not a good role model.
- Try to form a realistic assessment of how important college actually is; people often err in imagining it to be more or less important than it is in reality (these errors seem to be correlated with social class). I would estimate that the 4 years of college are only modestly more important than other years of your life. What you do right after college is important. What you do when you're in your late 20s is important.
Holden is a smart guy, but he's also operating under a severe set of political constraints, since his organization depends so strongly on its ability to raise funds. So we shouldn't make too much of the fact that he thinks academia is pretty good - obviously he's going to say that.
Interesting analysis. I hadn't heard of Goodman before so I appreciate the reference.
In my view the problem of induction has been almost entirely solved by the ideas from the literature on statistical learning, such as VC theory, MDL, Solomonoff induction, and PAC learning. You might disagree, but you should probably talk about why those ideas prove insufficient in your view if you want to convince people (especially if your audience is up-to-date on ML).
One particularly glaring limitation of Goodman's argument is that it depends on natural language predicates ("green", "grue", etc). Natural language is terribly ambiguous and imprecise, which makes it hard to evaluate philosophical statements about natural language predicates. You'd be better off casting the discussion in terms of computer programs that take a given set of input observations and produce an output prediction.
Of course you could write "green" and "grue" as computer functions, but it would be immediately obvious how much more contrived the program using "grue" is than the program using "green".
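To make that concrete, here's a hypothetical sketch (the cutoff date T is an arbitrary choice I made up, which is exactly the point):

```python
from datetime import datetime

T = datetime(2030, 1, 1)   # arbitrary cutoff that "grue" needs and "green" doesn't

def is_green(color: str, observed_at: datetime) -> bool:
    return color == "green"   # no reference to time at all

def is_grue(color: str, observed_at: datetime) -> bool:
    # Green if observed before T, blue afterward. The extra branch and the
    # magic constant make the program visibly more contrived - a longer
    # description, in the MDL sense.
    if observed_at < T:
        return color == "green"
    return color == "blue"
```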