If you make a car with a max speed of 65mph by decreasing the amount of force available, it will be:
- impossible to pass cars safely, because you won't be able to overtake quickly
- very difficult to maneuver while going near 65 mph, because you won't be able to accelerate quickly
- very annoying to change lanes, because while you are vectored to the side at some angle theta from the lane direction, your speed down the road will max out at (65 * cos(theta)) mph, making it difficult to speed up while changing lanes, which is often considered good form
- very difficult to go up steep hills at speed, because you will use most of your power fighting gravity.
It's very difficult to decrease max speed by decreasing performance without creating a car that is much worse.
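To put rough numbers on the lane-change point: if the angle is measured from the lane direction, the down-the-road component is the cosine term. A small sketch (the 65 mph cap is from the example above; the angles are made up):

```python
import math

TOP_SPEED = 65.0  # mph, the capped top speed

def forward_speed(heading_deg: float) -> float:
    """Down-the-road speed while the car is angled `heading_deg`
    degrees away from the lane direction, assuming the car is
    pinned at its top speed with no power in reserve."""
    return TOP_SPEED * math.cos(math.radians(heading_deg))

# Even a modest lane-change angle costs down-the-road speed,
# and a power-limited car has no reserve to make it back up.
for angle in (0, 5, 10, 15):
    print(f"{angle:>2} deg -> {forward_speed(angle):.1f} mph")
```

A governor-limited car could keep full acceleration available below the cap; a power-limited car cannot, which is the whole problem.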
To paraphrase the hiker's saying: "regret is mandatory, suffering is optional". What you describe as not feeling regret sounds to me like feeling regret but not suffering over it. Knowing that you could have made a better choice is an act of feeling regret for the choice you did make. Suffering as a result of it is bad for you (it's suffering, after all), and it sounds like you don't suffer when you regret. This is a good place to be! It's good to recognize that there were better possibilities, and maybe you can aspire to choose better next time; but maybe you ultimately did as well as you could have in that situation, so beating yourself up wouldn't be useful.
We may disagree on the semantics of the word regret, so imagine I'm saying regret_{alex} for my version, and regret_{gordon} for your version.
The original post has much more value than the one-sentence summary, but having a one-sentence explanation of the commonality between the mathematical example and the programming example can be useful.
I would say it is perhaps not nice to provide that sort of summary but it is kind.
The general lesson is that understanding the thing directly is better than understanding someone else's explanation of the thing.
This is a massive misread of the article. The benefit of lifting is the feeling of joy in the merely material, and of transforming the feeling of being embodied from a feeling of trappedness to a feeling of capabilities being granted to you.
Until I’d gained some muscle, I didn’t know that getting out of bed shouldn’t actually feel like much, physically, or that walking up a bunch of stairs shouldn’t tire you out, or that carrying groceries around shouldn’t be onerous. I felt cursed by the necessity of occupying space while shuffling around this mortal coil. And now I do not. Moreover, I no longer feel that I need some special justification for existing, because simply residing in the material is now a privilege.
The first time I moved apartments after I started seriously lifting, I enjoyed it. I had always suffered while moving before lifting, ending up sore and tired and cranky, but after lifting, I didn't feel any negatives.
Unfortunately, all of life is a virtue ethics/game theory context.
Excellent post. To do things, do things, and surround yourself with people who do things. To do things better, you have to practice rationality, but without some specific target goal, you can't evaluate whether you are being successful at your practice of rationality.
I've noticed a similar thing with Anki flashcards, where my brain learns to memorize responses to the shape of the input text blob when I have cards that are relatively uniquely-shaped. I have to swap around the layout every few months to ensure that the easiest model to subconsciously learn is the one that actually associates the content on the front of the card with the content on the back of the card.
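As a hypothetical sketch (not the real Anki template API, just the idea of varying the card's visual shape between reviews so the layout can't be memorized in place of the content):

```python
import random

def render_front(fields: dict[str, str], seed: int) -> str:
    """Render a card front with the field order shuffled by `seed`,
    so the visual 'shape' of the card changes between reviews and
    the shape itself can't serve as the retrieval cue."""
    rng = random.Random(seed)
    lines = [f"{name}: {value}" for name, value in fields.items()]
    rng.shuffle(lines)
    return "\n".join(lines)

card = {"Word": "ephemeral", "Part of speech": "adjective"}
print(render_front(card, seed=1))
print(render_front(card, seed=2))
```

The content is identical across seeds; only the layout moves, which is the manual reshuffling described above, automated.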
Less so under potentially adversarial conditions, when there are politics/culture-war aspects. For example, many people have large personal and social incentives to convince you of various ideas related to UFOs. In that case, it may not be the correct move to engage with the presented arguments, if they are words chosen to manipulate and not to inform. Do not process untrusted input.
I'm curious if you think that this formulation of the above idea is still antithetical to epistemic rationality.
Kids can be surprisingly useful resources at a surprisingly early age.
On farms, as you've said, kids can figure out what to do and help out easily. If your work requires a lot of low-skill repetitive manual labor, kids can do that, and it can help teach them how to do your slightly higher-skill labor next year.
This does not apply if you work as an engineer, or in an office, or many other cases where specific skills contingent on mostly-finished-developing brains are required to do your work and there is no manual labor that you can offload to children. If you expect your kid to go through the standard college route, there are 22 years of waiting before they can really do anything useful to help with your labor.
It feels like an important metric is "karma per view", "karma per read", "karma per user-minute-looking-at-text", or something similar. Currently we can't gauge that, so when someone whose name gives readers a strong prior that their post will be worth reading posts, that post will get more views, and therefore more upvotes, even at a similar "karma per read".
EDIT: EY's post loaded the second I posted this, but I promise it was an independent invention
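With made-up numbers, the normalization would look something like:

```python
def karma_per_view(karma: int, views: int) -> float:
    """Engagement-normalized score: upvotes per view."""
    return karma / views if views else 0.0

# Hypothetical numbers: a well-known author's post vs a newcomer's.
famous = karma_per_view(karma=120, views=4000)
newcomer = karma_per_view(karma=15, views=500)
print(famous, newcomer)  # identical quality-per-read, 8x karma gap
```

Both posts score 0.03 karma per view, yet raw karma makes the famous author's post look eight times better.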
What asymmetries did you introduce into your simulations that led to a difference? In my experience, models with no gender differences but with mandatory sexual reproduction usually end up 50/50.
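The usual argument for why 50/50 is the only stable point (Fisher's principle) can be sketched with a toy accounting function; `grandchild_share` and its numbers are illustrative, not from any particular simulation:

```python
def grandchild_share(p: float, r: float) -> float:
    """Relative number of grandchildren for a parent whose broods are
    a fraction `p` sons, in a population whose overall sex ratio is
    `r` sons. Every grandchild has exactly one father and one mother,
    so a son's expected matings scale as 1/r and a daughter's as
    1/(1 - r)."""
    return p / r + (1 - p) / (1 - r)

print(grandchild_share(0.0, 0.6))  # all-daughter broods in a male-biased world
print(grandchild_share(1.0, 0.6))  # ~1.67: all-son broods do worse
# At r = 0.5, no brood ratio beats any other - the stable point:
print(grandchild_share(0.25, 0.5), grandchild_share(0.75, 0.5))
```

Whichever sex is rarer yields more grandchildren per child, so any deviation from 50/50 creates selection pressure back toward it - which is why an asymmetry somewhere is needed to sustain a different ratio.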
Excellent post. Well-researched, with important caveats put right where I was about to ask for a clarification, and solidly straddles the line between life advice and teaching new facts about the world. As someone with alcoholic's genes who gets almost no hangovers, I probably won't be trying this, but I appreciate knowing that activated charcoal might help if I do start to worry about hangovers.
I noticed the same thing, and realized that I was feeling the LW-tribe feeling of "people say they like real science but like fake science, only me and my in-group like real science". It was also annoying for me as I watched it, but I think that responding specifically to that phrase is as much a tribal signaling thing as using it is.
For a good paper on this topic, I have to recommend Werfel et al. 2017:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0173677
They make a spatial model of a world where resources replenish at a fixed rate and show that mortal populations outcompete immortal populations by improving their children's fitness, as there are fewer mass starvation events.
Hi - I like this post and I'm glad you were able to put 60% of the value of a book into a table! One question I had - you say that IVF costs $12k and surrogacy costs $100k, but also that surrogacy is only $20k more than IVF? That doesn't add up to me.
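Spelling out the mismatch with the post's own figures:

```python
ivf_cost = 12_000        # figures quoted in the post
surrogacy_cost = 100_000
claimed_gap = 20_000     # the post's stated difference

actual_gap = surrogacy_cost - ivf_cost
print(actual_gap)  # 88000, not 20000
```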
Also, sperm/egg donation usually involves you getting paid to give those things, which technically helps you have children. But those children are probably not being raised by you, so a lot of the benefits you cite, like playing with grandchildren, might be smaller for children created with donated gametes than for children you bear and raise yourself.
I think winning at sports is more of a thing that led to our ancestors increasing their chances of procreation. Would you feel as happy about your relatives becoming sperm or egg donors?
To me, executing adaptations that probably made my ancestors increase their chances of procreation does make me happier (flirting successfully with people, feeling high-status, eating good food, etc), but not the things that actually maximize my current inclusive fitness. Otherwise, I would be really happy about the thought of becoming a sperm donor! You might be interested in this post about executing adaptations.
Can you clarify? Are you saying that you are only happy while actively procreating or increasing your children/relatives' chance of procreation?
Book review is now up here for those interested.
Narrow Roads of Gene Land Volume 2: Evolution of Sex by W. D. Hamilton claims to cover this topic, and I just got a copy. I plan on reading and writing a book review, because I suspect that Hamilton has some good theories that LW would be interested in, given this post and recent interest in the evolution of sex.
For a more iterative approach that isn't guided by theory, you can do small experiments whenever you are taking a photo. When you are taking a picture of something, try any or all of the following and see which come out better:
- Flash vs no flash.
- Move the camera up, down, left, and right.
- Move the camera closer or further away, possibly zooming to compensate.
- Move your subject to change the background.
- Try increasing or decreasing the amount of bokeh.
- If you have a friend nearby, try adding "off-camera flash" by having them hold up their phone flashlight.
Over time, you can build an intuition for which of these things are likely to help.
This post is excellent. The airplane runway metaphor hit home for me and I think it will help me explain my worries about exponential growth to other people more clearly than graphs, so thanks for writing it up!
One typical concern around building friendly AI that is slower or less effective than unfriendly AI is that a smarter unfriendly AI will be able to win against any non-smart friendly AI. Your proposal doesn't seem to address this problem.
It's also not clear how your proposal would prevent an AI running on a blockchain from using its (limited due to blockchain stuff?) power to create a copy of itself running on more traditional computing hardware.
I like this post's idea of reversing the causal arrow. Most people think that life philosophy causes life outcome, so they look for the right life philosophy; but if the causality runs the other way, you should chase the life outcome, and then you will end up with the life philosophy.
I don't get the first two paragraphs at all, though. Are you trying to say that Sapphire was doing something that everyone could learn how to do, but disguised it behind a mystic pretense, and that was bad? I don't think I fully get how it ties into the rest of the post (although I haven't seen the show, so I may be missing something).
I'd recommend GURPS as a base game. It is a very flexible toolkit for making RPGs. It already has some mechanics for things like "Enhanced Time Sense" and the ability to create characters that exist as intelligences without physical bodies other than computational hardware.
On a separate note, I think that incidents are the opposite of this - they require people to go through and find what is wrong immediately, because the response is urgent. If anything, a Root Cause Analysis after the fact would be more similar. Or possibly the outage investigation. You might be interested in Julia Evans' debugging puzzles, which are small-scale but good introductions. I could imagine similar scenarios with real servers (and a senior dev moderating) being good training on debugging server issues and learning new tools.
There is some extent to which you need long-term software project experience to learn how to deal with maintenance and extensibility over multiple years. However, there is still some benefit for devs who are fresh out of college and haven't done software maintenance. A lot of junior devs realize their designs are bad when asked "What if you need to add X later?", and these decision-making training games would help with that.
How would you feel about someone being given a pile of code, and having to add a feature that requires modifications throughout the codebase? That could be a decent simulation of working in a large legacy codebase, and could be used in a similar game context, where you do a debrief on what aspects of the existing code made it easy/hard to modify, and review the code that the dev playing the game wrote.
I would love to hear an example of this in more detail - I think I understand the things you are talking about but an example would help make sure.
Something I do in some conversations is use mathematical concepts like "normally distributed around X". I think this partially fits the thing you are talking about, but I find it has two additional benefits for conversation. First, it can help specify a topic more clearly and concisely for people that understand. And second, it can let people know that you know about math/stats and lets them start using similar terms in response, sometimes allowing conversations to go deeper fast because both parties know that the other party will understand more niche concepts that can be used for analogy.
I'd be careful with thinking of prepping as a binary "do/don't prep" distinction. If you live somewhere where a civil war happens every 2-3 years, the expected value of something that only has value in a civil war scenario is much higher than if one happens every 150 years or so. However, that doesn't mean you should "prep" in one case and not the other, just that some actions that would be worth it if civil wars were frequent are not worth it if civil wars are infrequent. Water may be useful in both, but training your friends in wilderness survival or whatever, maybe less so.
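With made-up numbers, the base-rate effect looks like:

```python
def annual_ev(p_disaster_per_year: float, value_in_disaster: float,
              annual_cost: float) -> float:
    """Expected yearly value of a prep that only pays off in a
    disaster. All the numbers below are hypothetical placeholders."""
    return p_disaster_per_year * value_in_disaster - annual_cost

# Same prep, two base rates: a civil war every ~2.5 years vs every ~150.
frequent = annual_ev(1 / 2.5, value_in_disaster=5_000, annual_cost=200)
rare = annual_ev(1 / 150, value_in_disaster=5_000, annual_cost=200)
print(frequent, rare)
```

The identical prep is clearly worth it in one world and clearly not in the other, which is the point: "prep or don't" is really "which preps clear the bar at your local base rate".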
I don't think I understand the question. If something is expensive but will definitely (let's say with 99% certainty) save my life (which I think is the sort of thing you are describing as expensive_but_must), I would buy it at almost any cost.
"buying a bunker is not frequent that much anymore"
Are you sure? The second doom boom is here, and people are buying bunkers again.
The difference between bunkers and water is not just the cost, but the probability of needing one - there are many non-nuclear-war cases for wanting water on hand. So water has a higher probability of being useful, and a lower cost.
Most places have water, but how close is it to where you live? If you don't have a way of storing a significant amount of water, and live far enough from your local water source that you would have to drive, there is a benefit to having enough water storage so that you can transport a reasonable amount of water per trip.
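As a rough sizing sketch (the 1 gallon per person per day figure is the commonly cited minimum preparedness guideline for drinking and basic sanitation; hot climates and longer horizons need more):

```python
def storage_needed(people: int, days: int,
                   gallons_per_person_day: float = 1.0) -> float:
    """Gallons of stored water to cover `people` for `days` at a
    given daily ration (default: the common 1 gal/person/day
    minimum guideline)."""
    return people * days * gallons_per_person_day

# A family of four covering two weeks without trips to a water source:
print(storage_needed(people=4, days=14))  # 56.0 gallons
```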
But I agree that having a water filter on hand is useful in cases where you have access to water, but you aren't sure whether it's safe to drink or not.
I think your definition of perfect model is a bit off - a circuit diagram of a computer is definitely not a perfect model of the computer! The computer itself has much more state and complexity, such as temperature of the various components, which are relevant to the computer but not the model.
Containing a copy of your source code is a weird definition of a model. All programs contain their source code; does a program that prints its source code have any more of a model of itself than other programs, which are just made of their source code? Human brains are not capable of holding a perfect model of a human brain, plus more, because the best-case encoding requires using one atom of the human brain to model an atom in the human brain, leaving you at 100% capacity.
The key word is "perfect" - to fit a model of a thing inside that thing, the model must contain less information than the thing does.
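For what it's worth, printing your own source doesn't even require storing a second copy of yourself - a standard quine does it with one string and one formatting trick. A minimal Python sketch:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

These two lines print exactly themselves, yet contain no richer "self-model" than any other string formatting - which is why self-printing is a poor criterion for self-modeling.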
This is a really good illustration of the 5 and 10 problem. I read the linked description, but didn't fully understand it until you walked through this example in more depth. Thanks!
One possible typo: You used 5a and 5b in most of the article, but in the original list of steps they are called 6 and 7.
This is definitely not a "big problem" in that we can use math regardless of what the outcome is.
It sounds like you're arguing that semantic uniformity doesn't matter, because we can change what "exists" means. But once you change what "exists" means, you will likely run into epistemological issues. If your mathematical objects aren't physical entities capable of interacting with the world, how can you have knowledge that is causally related to those entities? That's the dilemma of the argument above - it seems possible to get semantic uniformity at the expense of epistemological uniformity, or vice versa, but having both together is difficult.
I'm not super up-to-date on fictionalism, but I think I have a reasonable response to this.
When we are talking about fictional worlds, we understand that we have entered a new form of reasoning. In cases of fictional worlds, all parties usually understand that we are not talking about the standard predicate, "exists", we are talking about some other predicate, "fictionally-exists". You can detect this because if you ask people "do those three Jedi really exist?", they will probably say no.
However, with math, it's less clear that we are talking fictionally or talking only about propositions within an axiomatic system. We could swap out the "exists" predicate with something like "mathematically-exists" (within some specific axiom system), but it's less clear what the motivation is compared to fictional cases. People talk as if 2+2 does really equal 4, not just that it's useful to pretend that it's true.
Hi! I really appreciate this reply, and I stewed on it for a bit. I think the crux of our disagreement comes down to definitions of things, and that we mostly agree except for definitions of some words.
Knowledge - I think knowledge has to be correct to be knowledge, otherwise you just think you have knowledge. It seems like we disagree here, and you think that knowledge just means a belief that is likely to be true (and for the right reason?). It's unclear to me how you would cash out "accurate map" for things that you can't physically observe like math, but I think I get the gist of your definition. Also, side note, justified true belief is not a widely held view in modern philosophy; most theories of knowledge go for justified true belief + something else.
Real - We both agree it doesn't matter for our day-to-day lives whether math is real or not. (It may matter for patent law, if it decides whether math is treated as an invention or a discovery!) I think that it would be nice to know whether math is real or not, and I try to understand the logical form of sentences I utter to know what fact about the world would make them true or false. So you say I "don't have to worry about" whether numbers are real, and I agree – their reality or non-reality is not causing me any problem, I'm just curious.
I also view epistemic uniformity as pretty important, because we should have the same standards of knowledge across all fields. You seem to think that mathematical knowledge doesn't exist, because mathematical "knowledge" is just what we have derived within a system. I can agree with that! The Benacerraf paper presents a big problem for realism, which you seem to buy - and you're willing to put up with losing semantic uniformity for it.
I think our difference comes down to how much we want semantic uniformity in a theory of mathematical truth.
Hi - looks like you did a relative link (https://www.lesswrong.com/capital-gains-in-agi-big.png) but you want this absolute link instead: https://www.jefftk.com/capital-gains-in-agi-big.png
Benacerraf convinced me that either mathematical sentences have different logical forms than non-mathematical sentences or that mathematical knowledge has a different form than non-mathematical knowledge. It sounds like your view is that mathematical sentences have different forms (they all have an implicit "within some mathematical system that is relevant, it is provable that..." before them), and also that mathematical knowledge is different (not real knowledge, just exists in a system). In other words, it sounds like you just think that epistemic uniformity and semantic uniformity are not important features of a theory of mathematical truth. That comes down to personal aesthetics and meta-beliefs about how theories should look, so I will just talk about what I think you're saying in this comment.
I think what you're trying to say is that mathematical statements are not true or false in an absolute sense, only true or false within a proof system, and their truth or falsehood is based entirely on whether they can be derived within that system.
If that's true, math is just a map, and maps are neither true nor false. If math is just a map, then there is no such thing as objective mathematical truth. So it sounds like you agree that knowledge about any mathematical object is impossible. But when you say that "Epistemic uniformity simply states that math is a useful model", I think that's a little different than what I intended it to mean. Epistemic uniformity says that evaluating the truth-value of a mathematical statement should be a process similar to evaluating the truth-value of any other statement.
The issue here is that our non-mathematical statements aren't only internally true or false, they are actually true or false. If you asked someone to justify sentence (1), and they handed you a proof about New York and London, consistent on a set of city-axioms, you would probably be pretty confused. Epistemic uniformity says that a theory of mathematical truth should look relatively like the rest of our theories of truth - why should math be special?
I'm going to take a slight objection to using the phrase "discoverable experimentally" to describe proving a theorem and thinking up numbers, but let's talk about those examples. To me, it sounds like that is doing work within a system of math to determine whether a claim is consistent with axioms. There is some tension between saying that math is just a tool and thinking that you can do experiments on it to discover facts about the world - experimenting on the tool will only tell us about the tool. Doing math (under the intuitionist paradigm) tells us whether something is provable within a mathematical system, but it has no bearing on whether it is true outside of our minds.
(Side note about intuitionism:
I think it's important to prevent talking past each other by checking definitions, so I'd like to clarify what you mean by intuitionism. In the definition I'm aware of, intuitionism says roughly that math exists entirely in minds, and the corresponding account of mathematical truth is that a statement is true if someone has a mental construction proving it to be true. Please let me know if this is not what you meant!
My main objection to intuitionism is that it makes a lot of math time-dependent (e.g. 2+2 didn't equal 4 until someone proved it for the first time). Under an intuitionist account of mathematical truth, you can make sentence (2) true by finding three examples that fit. But then that statement's truth or falsehood is independent of whether the mathematical fact is really true or false (intuitionists usually don't think there exist universal mathematical truths). It seems to me that math is a real thing in the universe, it was real before humans comprehended it, and it will remain real after humans are gone. That view is incompatible with intuitionism.)
(Another note - can you be a bit more specific about the contradiction you think is avoided by giving up Platonism? I think that you still don't have epistemic and semantic uniformity with an intuitionist/combinatorial theory of math)
I think one issue is that comment trees are just not the ideal format for conversation. It's pretty common that someone will make a comment with four different claims, and then ten different comments of which two make similar objections to one, and then a couple other objections, and those will be responded to, and it's all very ad-hoc. Structure doesn't spontaneously emerge, and having to scan through a whole disordered tree to understand the current state of the argument makes it hard for bystanders to join.
Having a section for posts-for-the-month (or "Ongoing Discussions"?) would help people that want long-running discussions so they would all be in the same place, over only a few threads. But the comment thread discussion format is not great.
This would be a large engineering effort, but support for argument maps could help record the structure of an argument, and also help discussion continue, as each new node could be a new post, and having a visual of the state of an argument could help keep things organized, compared to comment trees?
I really enjoyed reading this, and learning about the symmetries between electron shells and neutron/proton shells. It wasn't clear to me why the nuclear waste in concrete is safe to hug - from a quick google, apparently all matter attenuates gamma radiation, and concrete is made out of matter. Concrete is typically used because it is cheap and dense, and often "heavy concrete" made with dense industrial waste material (fly ash, slag, etc) is used for radiation shielding. source
I think it still applies relatively well? If you are trying to limit growth to avoid x-risk, and anyone else is purely maximizing growth, they will grow relative to you, and your growth-limited, x-risk-avoiding faction will become irrelevant in size as the other factions grow. The point about requiring unilateral buy-in still applies: if you want to slow growth to avoid x-risk, you need unilateral buy-in, or you will quickly become irrelevant compared to people maximizing growth.
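With hypothetical growth rates, the shrinking relative share looks like:

```python
def relative_share(r_capped: float, r_max: float, years: int) -> float:
    """Fraction of total output held by a growth-capped faction that
    starts at parity with a growth-maximizing faction. The rates
    below are made-up annual growth rates for illustration."""
    capped = (1 + r_capped) ** years
    maxed = (1 + r_max) ** years
    return capped / (capped + maxed)

# Starting 50/50, capping your growth at 1% vs a rival's 5%:
print(round(relative_share(0.01, 0.05, 50), 3))  # ~0.125 after 50 years
```

The capped faction still grows in absolute terms, but its share of the whole keeps falling toward zero - hence the need for everyone to buy in.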
I guess it depends on how you think about profit maximization. If eliminating competition increases profit, wouldn't a profit-maximizer want to eliminate competition as well?
If you don't like doing all the non-programming work that running your own company entails, you might prefer an annoying job that still only requires programming over starting your own company.
Yeah, exactly! This whole post is meant to suggest the idea of a gun license as a cheap way of buying a specific kind of optionality.
A lot of them have exceptions for items produced before the ban, like the Massachusetts "Assault Weapons Ban" that allows only AR-15 lowers produced before the ban to be used without reduced ergonomics. Also, having a license gives you the option to buy a gun before such a ban goes into effect, if they ban items that you think you would want later! If you see a ban happening, and have to wait a year for your license, you do not have an option to get whatever is being banned before the ban goes through.
Also, procuring guns has lots of downsides - storage cost, danger, theft target, etc. Reducing the time from wanting gun -> getting gun (legally) from over a year to a day gets you most of the way to the goal (which is having the option of buying a gun) without any of the downside (except the $110). So it may be a better deal.
I edited my post to make it Massachusetts specific - good point on gun accessibility variability. I didn't get into the reasons for owning a gun, because that is something I am personally less sure about and also varies widely from person to person. The main point is meant to be that a license is a cheap way to buy optionality, and I think that holds, although I may try to find some more general examples of times when you might want a gun.
Great point - edited to update with this. In Massachusetts, you need a license to own a gun or ammo, and that is the same license as a concealed carry license. I definitely over-generalized above. Thanks for pointing that out!
I didn't say anything about actually owning a gun in this post, only purchasing the right to buy a gun later! I think actually owning a gun has more potential downsides than having the right to own a gun.