So far, it seems to work pretty well, and it's very quiet in standby mode - roughly similar to the fridge. But every time I fry anything on the stove, the fan automatically speeds up to the highest level, which is much louder, roughly similar to a typical conversation. On the bright side, though, at least that proves that it works.
Sorry, let me try again, and be a little more direct. If the New Center starts to actually swing votes, Republicans will join and pretend to be centrists, while trying to co-opt the group into supporting Republicans.
Meanwhile, Democrats will join and try to co-opt the group into supporting Democrats.
Unless you have a way to ensure that only actual centrists have any influence, you'll end up with a group that's mostly made up of extreme partisans from both sides. And that will make it impossible for the group to function as intended.
I see a few other failure points mentioned, but no one has mentioned what I consider the primary obstacle - if membership in the New Center organization is easy, what prevents partisans from joining purely to influence its decisions? And if membership is hard, how do you find enough people willing to join?
The key idea that makes Bitcoin work is that it runs essentially a decentralized voting algorithm. Proof-of-work means that everyone gets a number of votes proportional to the computational power that they're willing to spend.
You need something similar to proof-of-work here, but I don't see any good way to implement it.
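For concreteness, the computational-cost half of proof-of-work is easy to sketch (a toy Python version; real Bitcoin mining compares the hash against a 256-bit difficulty target, and the input string here is made up for illustration):

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest starts with
    `difficulty` leading zero hex digits. Each extra zero multiplies the
    expected work by 16, so a 'vote' costs real computation."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block contents", difficulty=3)
print(nonce)
```

What's missing for the New Center case is the analogue of that scarce resource: something costly enough that partisans can't cheaply buy influence.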
To save you a click, I've copied the example visualization on the homepage below. It shows all of the variables in the entire stack at the specified point in the execution.
It's all auto-generated, so it doesn't support more complex visualizations like your Container With Most Water example. But maybe it could be extended so that the user could define custom visualizations for certain objects?
Edit: As others have pointed out, this is not the best strategy.
Nice problem! The best strategy seems to be to mix the red clay with the blue clay in infinitesimally small steps. Every bit of red clay then becomes as cold as possible, meaning that as much energy as possible is transferred to the blue clay.
Here's what we get with two steps:
Mix 1/2 red at 100 degrees + 1 blue at 0 degrees => 33.33 degrees.
Now remove the red clay, and add the other half.
1/2 red at 100 degrees + 1 blue at 33.33 degrees => 55.55 degrees.
Now let's compute this with n steps. So at each step we add a 1/n fraction of the red clay to the blue clay, then remove it. Let the temperature of the blue clay after $k$ steps be $T_k$.
Adding a 1/n piece of red clay to the blue clay:

$T_k = \frac{T_{k-1} + 100/n}{1 + 1/n} = \frac{n}{n+1} T_{k-1} + \frac{100}{n+1}$
To simplify our expression, let $r$ be the ratio $\frac{n}{n+1}$. Then $T_k = r \, T_{k-1} + 100(1-r)$.
Unrolling our definition (with $T_0 = 0$) gives a geometric series:

$T_n = 100(1-r)\left(1 + r + r^2 + \cdots + r^{n-1}\right)$
Replacing the geometric series with its sum:

$T_n = 100(1-r) \cdot \frac{1 - r^n}{1 - r} = 100\left(1 - r^n\right)$
Now $r^n = \left(\frac{n}{n+1}\right)^n = \left(1 - \frac{1}{n+1}\right)^n$. As $n$ approaches infinity, it's well-known that this approaches $1/e$.
$T_n \to 100(1 - 1/e) \approx 63.2$ degrees.
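To double-check that limit numerically, here's a short simulation of the n-step procedure (a sketch assuming, as in the examples above, equal masses and heat capacities for the two clays):

```python
def final_temp(n: int, red_temp: float = 100.0, blue_temp: float = 0.0) -> float:
    """Mix a unit mass of red clay into a unit mass of blue clay in n
    equal pieces, removing each piece once it reaches equilibrium."""
    t = blue_temp
    for _ in range(n):
        # A piece of mass 1/n at red_temp equilibrates with blue clay of mass 1.
        t = (t + red_temp / n) / (1 + 1 / n)
    return t

print(final_temp(2))       # 55.55..., the two-step example above
print(final_temp(100000))  # approaches 100 * (1 - 1/e) = 63.21...
```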
As a final note, the LaTeX support for LessWrong is the best I've seen anywhere. My thanks to the team!
If your analysis of the game theory of the situation is correct, we would expect that the military occasionally makes concessions to share power, but also violently reasserts its full control when it thinks that's necessary. Do you see any way for the country to break out of that cycle?
For example, how effective do you think new international sanctions will be at curbing the violence?
Google is the prime example of a tech company that values ethics, or it was in the recent past. I have much less faith in Amazon or Microsoft or Facebook or the US federal government or the Chinese government that they would even make gestures toward responsibility in AI.
I work for Microsoft, though not in AI/ML. My impression is that we do care deeply about using AI responsibly, but not necessarily about the kinds of alignment issues that people on LessWrong are most interested in.
Microsoft's leadership seems to be mostly concerned that AI will be biased in various ways, or will make mistakes when it's deployed in the real world. There are also privacy concerns around how data is being collected (though I suspect that's also an opportunistic way to attack Google and Facebook, since they get most of the revenue for personalized ads).
The LessWrong community seems to be more concerned that AI will be too good at achieving its objectives, and we'll realize when it's too late that those aren't the actual objectives we want (e.g., Paperclip Maximizer).
To me those seem like mostly opposite concerns. That's why I'm actually somewhat skeptical of your hope that ethical AI teams would push a solution for the alignment issue. The work might overlap in some ways, but I think the main goals are different.
I wonder whether everyone would be better off if they automatically reversed any tempting advice that they heard (except feedback directed at them personally). Whenever they read an inspirational figure saying “take more risks”, they interpret it as “I seem to be looking for advice telling me to take more risks; that fact itself means I am probably risk-seeking and need to be more careful”. Whenever they read someone telling them about the obesity crisis, they interpret it as “I seem to be in a very health-conscious community; maybe I should worry about my weight less.”
Of course, some comments noted that this meta-advice is also advice that you should consider reversing - if you're on LessWrong, you're already in a community that's committed to testing ideas, perhaps to an extreme degree.
For myself, when it comes to advice, I usually try to inform rather than to persuade. That is, I present the range of opinions that I consider reasonable, and let people make their own decisions. Sometimes I'll explain my own approach, but for most issues I just hope to help people understand a broader range of perspectives.
This does occasionally backfire - some people are already committed strongly to one side, and a summary of the opposite perspective that sounds reasonable to me sounds absurd to them. In some cases, I've trapped myself into defending one side, trying to make it sound more reasonable, while I actually believe the exact opposite. And that tends to be more confusing than helpful.
But as long as I stick to following this strategy only with friends that are already curious and thoughtful people, it generally works pretty well.
(And did you catch how I followed this strategy in this comment itself?)
I understand your argument that there's a systematic bias from tracking progress on relatively narrow metrics. If progress is uneven across different areas at different times, then the areas that saw progress in the recent past may not be the same areas in which we see progress today.
You don't seem to make any suggestions on what would be a better metric to use. But to me it seems like the simplest solution is just to use broader metrics. For example, instead of tracking the cost of installing solar panels, we could measure the total cost of our electric grid (perhaps including environmental concerns such as carbon emissions as one part of that cost).
Along those lines, the broadest metrics we have are macroeconomic statistics such as GDP per capita. The arguments I've seen for stagnation (mostly from Jason Crawford or Tyler Cowen) already use the recent observed slowdown in GDP growth extensively.
If we see the same trend across most areas and most levels of metrics (both narrow, specific use cases and overall summary statistics) - isn't that strong evidence in favor of the stagnation hypothesis?
Or do you think there are no reliable metrics for measuring progress as a whole?
Thankfully, toasters don't often burn houses down, and so this cost is low (under 1%) for most products. (I'm interested in examples of physical goods for which this is not the case.)
The first example that I thought of is the "wedding tax" - that is, anything that's purchased specifically for a wedding is significantly more expensive than the same item purchased for a different event. This includes both services (e.g., photography) and physical items (e.g., a cake).
But my wife and I both cared very little for most of those things. If we had known better, we wouldn't have hired as many vendors as we did. We concluded after our wedding that most of the money we spent wasn't worth it.
Of course, I don't mean to judge those who do spend extravagantly on their wedding, if that's what they truly want. But I think part of the reason people spend so much is because of social expectations, not because it actually makes them happy.
On memorizing piano pieces: I took several years of piano lessons. I once learned to play a piece from memory, only to forget the opening chord at the recital. I was completely stuck, so after a minute of trying I had to just apologize and sit down again. It was likely the most embarrassing experience of my childhood.
Until then, I had assumed that just playing the piece every day was enough for my "muscle memory" to know what to do. At some point, during practice, my hands would just play the piece correctly without any conscious thought or effort. But that was actually my downfall - my muscle memory could easily evaporate when I was in a different setting. And under pressure, my conscious brain discovered that it wasn't prepared to step in and help.
After that, my piano teacher helped me try some other techniques. The strategy that worked best for me was to both practice daily to engage my muscle memory, and also to consciously memorize the exact notes in several key measures (usually the opening of each section).
I mostly agree with your thesis, but I noticed that you didn't mention agriculture in the last section, so I looked up some numbers.
The easiest stat I can find to track long-term is the hours of labor required to produce 100 bushels of wheat.
1830: 250 - 300 hours
1890: 40 - 50 hours
1930: 15 - 20 hours
1955: 6 - 12 hours
1965: 5 hours
1975: 3.75 hours
1987: 3 hours
That source stops in the 1980s, but I found another source that says the equivalent number today is 2 hours. That roughly matches the more recent data on total agricultural productivity from the USDA, which shows continued improvement, but not on the scale of the mid-1800s.
On the format: Since you asked for feedback, I found this format a little harder to follow than other LessWrong posts. For me, short paragraphs are great when used sparingly to make a particular point punchier. But an entire post like that feels like someone is talking too quickly and not giving me time to think. (I also don't read Twitter, so perhaps it's just not well-suited for me.)
On the content: Robert Kegan's "Immunity to Change" framework addresses some of this, especially the "shadow values" (which he calls hidden commitments). I learned the framework from this book, which was very helpful for me last year in uncovering some of my own hidden commitments (as well as for a few other reasons): https://www.amazon.com/How-Talk-Change-Work-Transformation-ebook/dp/B003AU4DX2/.
I hadn't thought of applying this to productivity systems, though. That's very insightful, and definitely an area where I still experience tension. So I think this will be helpful, thanks!
I don't have the philosophical sophistication to explain this as clearly as I would like, but I think fiction is valuable to the extent that it can be "more true" than a literal history.
Of course, fiction is obviously false at the most basic level, since the precise events it records never actually happened. But it can be effective at introducing abstract concepts. And except for trivia competitions, the abstract patterns are usually what's most useful anyway.
The best analogy I can think of is lifting weights. Fiction is an artificial gym that trains our minds to recognize specific patterns, much as weight lifting uses artificial movements that target specific muscle groups.
Fiction works by association, which as you suggest is how our minds tend to operate by default already. So at a minimum, wrapping ideas in a fictional story can make them more memorable. For example, people who memorize decks of cards tend to use "memory palace" techniques that associate the cards with vivid events.
The knowledge we gain from reading fiction is largely subconscious, but for me the most important part is the ability to understand how people who are different from me will think or act. This can also inspire me to think and act more like the role models I've observed.
There are other purposes in reading fiction - some fiction is meant mainly for entertainment. But I think most of what people would consider classics aim to teach something deeper. Perhaps what you experience as meaningful in reading Lord of the Rings is related to this?
Of course, there is the danger that reading bad fiction will make you less informed than you would have been otherwise. And the fact that learning occurs mostly subconsciously exacerbates this problem, since it can be difficult to counter a faulty narrative once you've read it.
But fiction seems no more dangerous to me than any other method of getting information from other people. Even sticking strictly to facts can be misleading if the facts are selectively reported (as occurs frequently in politics).
I do need to think some more about your point about how exactly to distinguish what part of a story is fictional and what can be treated as true. I don't have a clear framework for that yet, though in practice it rarely seems to be an issue. Do you have an example of a time you felt misled by a fictional story?
Overall, I think my understanding of the world, and especially of people, would be greatly impoverished without fiction.
Several months ago, some people argued that trying to develop a vaccine for COVID-19 was pointless, because the "common cold" includes several types of coronaviruses, which have never had a successful vaccine.
Now that we have multiple successful vaccines for COVID-19, could we use the same methods to produce a vaccine for the common cold?
Five minutes of research suggests to me that it would be worth it to try. (Caveat: I picked the first numbers I found from Google, and I haven't double-checked these.)
Spending $9 billion to save $6 billion per year (15% of $40 billion, assuming all types of colds have roughly the same severity) sounds like a good deal to me. And chances are that the cost of development could be much lower in a non-emergency situation, since we don't need so much redundancy.
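Spelling out that back-of-the-envelope arithmetic (the dollar figures are the rough, unchecked Google numbers mentioned above, not verified data):

```python
# All figures are rough, unverified ballpark numbers from a quick search.
dev_cost = 9e9            # assumed development cost, at COVID-19 scale, in dollars
annual_cold_cost = 40e9   # assumed yearly economic cost of the common cold
coronavirus_share = 0.15  # assumed fraction of colds caused by coronaviruses

annual_savings = annual_cold_cost * coronavirus_share
payback_years = dev_cost / annual_savings
print(annual_savings / 1e9, payback_years)  # 6.0 (billion per year), 1.5 (years)
```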
No war before WWI ever had a large enough number of combatants or was deadly enough in general to make a real dent in the population.
I think that's fairly inaccurate. Just to pick the first example that came to mind:
By all accounts, the population of Asia crashed during Chinggis Khan’s wars of conquest. China had the most to lose, so China lost the most—anywhere from 30 to 60 million. The Jin dynasty ruling northern China recorded 7.6 million households in the early thirteenth century. In 1234 the first census under the Mongols recorded 1.7 million households in the same area. In his biography of Chinggis Khan, John Man interprets these two data points as a population decline from 60 million to 10 million.
1. Thanks, I've had much better experiences with my landlord, but your experience might be more typical. Lack of adequate insulation is a clear problem, and one that's potentially worsened by the current system in which landlords pay for installing insulation but tenants generally pay for electricity. It's also the kind of issue that wouldn't become known to the tenants until after they've already moved in. So it makes sense to me that this would require legislation.
The process you propose for maintaining quality sounds reasonable enough. It might even be less susceptible to abuse than the current system of requiring security deposits, which the landlord can decide whether or not to refund. I've never experienced abuse of that type, but it wouldn't surprise me if it's relatively common.
2. I agree there's a lot more design work to do here. But before diving into that, I'm not entirely convinced by this point:
If you use [the amount that people actually spend] as your optimization metric, as our cities currently do, you get overpriced services.
When I think about which services are overpriced, the first ones that come to mind are college tuition and healthcare. But the primary cost drivers there are not rent, so I don't think your proposal would affect them very much.
If we limit our discussion to services that are overpriced due to high rent costs, the only one I can think of is restaurants. I've never seen an actual restaurant's budget, but I've heard that their costs are generally split evenly into rent, salaries, and the cost of the food itself. And it makes sense that rent would be a major cost, since table space at restaurants is clearly inefficient - even in the pre-pandemic world, restaurants often operated at capacity for only a few hours each weekend. So I'll grant that there's likely room for improvement there.
One of the main drawbacks I see in this system is that it provides little incentive for anyone to improve the value of their own property, or even to maintain it. The benefit of a market system is that it does provide this incentive, which I think is much more important than you admit here.
High housing costs at least lead to "skin in the game." Without that, you probably need regulations to ensure that everyone maintains their property at a certain minimum level, and regular inspections to enforce it. I don't see anything like that mentioned here - do you have any thoughts on how it would work, and how much the overhead costs would be?
Besides that, I'm also concerned about the effects on commercial property. You commented below that being assigned a store location would look more like winning a local election than signing a lease.
That sounds like another large source of overhead. I believe the amount that people actually spend at a store is a better measure of the value they derive from it than their voting could be. You could try to redesign your system to use store revenue as propinquity votes - but why? The free market already does that efficiently with no extra effort required.
Comment by timothy-johnson on [deleted post]
There are only two ways I could see a marriage working long-term between two people in their early-to-mid twenties:
1. They grow as individuals and as partners together, and remain a great match even after both of them change drastically over time.
2. They decide to each sacrifice their own pursuits of personal growth for the relationship.
"Personal growth" can mean a lot of different things to different people, but my experience is precisely the opposite of what you suggest.
I'm 28, and happily married for two years now. One of the things I like most about my wife is the way that she encourages my personal growth, and I think she would say the same about me. That both makes us better people and makes our relationship stronger.
Being married does come with a few tradeoffs. For example, since my wife is in academia and I'm in software engineering, I'm committed to move wherever we need to for her job. (When we got married that seemed like a sacrifice, though COVID has made it kind of a moot point now.) But any logistical difficulties are far outweighed by the fact that I like who I am much better when I'm with her.
We haven't had kids yet. When we do, I expect that I'll have to scale back at work. But I also think having kids is one of the best ways to push myself to become more patient and selfless, which to me is worth more. (And my workplace, like others in BigTech, is pretty supportive of family life.)
I do feel very fortunate to have the job and the marriage that I do. But at the same time, I think most LessWrongers are capable of having the same if they want it. What part of your personal growth do you expect you would need to sacrifice to maintain a marriage and/or a family?
"Efficiency is the goal, which means many iterations, but also getting as much information as possible out of each iteration. One of my big rules has always been, “double it, or cut it in half.” Don’t waste your time adjusting something by 5 percent, then another 5 percent, then another . . . just double it, and see if it even had the effect you thought it was going to have at all. If it went too far, now you know you’re on the right track, and can drop back down accordingly. But maybe it still didn’t go far enough, and you’ve just saved yourself a dozen iterations inching upward 5 percent at a time."
There's probably some dependence between fires in subsequent years, right? If an area burns one year, I would assume it's less likely to burn the next year. Could that explain the drop from 2018 to 2019?
One of my internship mentors at Google told me their average software engineer generates $1 million of value for the company every year. So I don't think it's any mystery why they're paid so well.
I believe the number of solutions you get should be 12-choose-3 instead of 13-choose-4. After the number of cards in each of the first three suits is chosen, the number of cards in the fourth suit is already determined.
I was talking with a friend just tonight about how scientific journals take months to get submissions reviewed, even though most reviewers do all of their reviews at the last minute, or pass them on to their grad students.
I think academia could be significantly less stressful if everyone actually finished reviews promptly. It's very demoralizing (at least in my experience as a grad student) to pick up a paper that you thought was done months ago and have to fix it up again after it's been rejected.
But unfortunately, there's little benefit in that for the reviewers themselves. And academia seems particularly resistant to any attempts to change things.
If I understand correctly, you say that two people's probabilities can fail to converge because they differ in whether they trust the source of some new information.
But to me, that seems like it only pushes the paradox back one step. Shouldn't everyone's estimates of how trustworthy each person is also tend to converge? Or maybe most people just don't think that far?
Your example also doesn't explain the backfire effect. You assume that everyone moves in the direction of the information that they're given, just perhaps by different amounts. But can you explain why people might update their beliefs in opposite directions?