Currently, much of the Earth is too cold to live in. On a warmer Earth, those places would become habitable even as other places became too hot.
Arguments for devastation typically ignore adaptation, which will reduce vulnerability dramatically.
I think this is an important crux, and one that I believe. Most arguments about climate change seem to assume it will just happen to us, with no response other than trying to keep things exactly as they are in the face of changing weather. But obviously that's not what will happen, because it never happens except in isolated pockets. In general, people find ways to adapt to their circumstances.
For what it's worth, the other crux I have in not worrying too much about climate change is that even the worst realistic forecasts don't make Earth's climate move outside its historical distribution. Humans have lived during one of Earth's colder periods, but historically it's been a lot hotter. Our bodies are well adapted for heat (so long as we can cool off using sweat) and cold (so long as there's a source of fuel for fire and material to make clothes from), so it would take climate shifts beyond what's typically predicted to get outside the range of what humans seem able to adapt to.
I don't know whether Lomborg is right about his other claims, although given the above I don't think it matters much.
Separate note, but I downvoted this post because it's unclear to me from reading the post that it justifies the strength of the claim in the title (or at least the strength of the claim as I interpret it). Rather than "Bariatric surgery seems like a no-brainer for most morbidly obese people", something like "Bariatric surgery recommended" or "Unclear why more people don't get bariatric surgery" would have fit better.
I'm not sure it's so clear cut anymore thanks to the existence of semaglutide. It seems likely to provide better tradeoffs than surgery for weight loss.
A full-on smile seems like a bit much, as others are suggesting, but I have seen clear benefits from something a bit more subdued.
Your neutral expression can be more or less smiley. Being slightly frowny or flatly neutral usually requires a slight contraction of the jaw muscles. To be very subtly smiling, you just have to relax them. You won't look like you're smiling, but people will notice that you're relaxed, and then how they treat you reflects how your being relaxed makes them feel about you, which is generally for the best (unless a situation calls for you not to be relaxed!).
A few things come to mind:
- Doing a lot of self work, therapy, and meditation to overcome various fears and anxieties that were not only big drags on my life but also got in the way moment to moment, making it hard to just do things rather than be seized with worry about how they could go wrong
- Working on things I care about. I find it hard to be motivated when I don't value the outcomes of my work. Finding work aligned with my motivations makes me way more productive.
- Lots of tools and systems to get the tactics of productivity right. This can be a rabbit hole, but the two books I found most useful were Getting Things Done and 7 Habits of Highly Effective People.
First, some commentary.
The rationalist community is fairly young. Although there are a few folks in their 50s and 60s, most folks are under 40, as you've noticed. I've only recently joined the over 40 crowd, and there are some effects that make us less present than younger folks:
- we have more established lives that pull us away from in-person rationalist events
- we are more likely to already have fulfilling relationships (romantic and friendly) that reduce the pressure to attend parties to meet more people
- we have less energy for staying up late at house parties
This means we show up less often to things compared to younger folks. That said, we're not totally absent. I manage to show up to things from time to time. Here's where I've most noticed folks in their 40s and 50s attending:
- ACX meetups
- Winter Solstice
Aside from those, as others have noted, rationalists overlap with other communities, and many rationalists, as they get older, seem to spend more time with those communities rather than attending rationalist events: dance, kink, poly, queer, and authentic relating events, among others.
note to self: add in some text here or in the next chapter that's more explicit about how we can have abstractions that agree by glossing over enough details to enable robust agreement, at the cost of being able to talk precisely about fine-grained details of the world.
It's up to you, but to be frank I'm also frustrated with how you've approached this discussion. You've only provided a vague and extensional explanation of what you mean by regret, whereas I've provided more detailed models of what I'm thinking, which you've objected to by roughly saying "idk doesn't match my intuitions". That's not something we can build an actual productive discussion on unless you dig into your intuitions more and provide a model for why they are useful.
The only difference between these two counterfactuals is how likely we should believe each one to be. There is a non-zero probability that you went to Alpha Centauri last night, just as there is that you played Baba Is You. One is much more likely than the other, but they are still of the same kind.
This doesn't seem any different from what I said. Everything you're talking about is a counterfactual: something someone could have done but didn't.
Sure, I totally get why people typically say "I didn't do my best". It's a recognition of a counterfactual world that wasn't realized: if they had counterfactually done something else they reckon that the outcome would have been better than what they got. It also expresses a wish, something like "I wish I had done something else that had a better result". And there's also a bit of something that I'd consider reasonable: an expression that you didn't do as well as you'd expect against your base rate of performance in similar situations.
I also get that saying this is psychologically necessary for many people given how they reason about counterfactuals, and that they wouldn't actually get better at things unless they were able to think about counterfactual pasts and pretend that they could have happened.
Thus I'm not actually arguing that we should do away with this meaning! Many people need it in order to live their lives. My point here is that if you hold to this meaning of regret, then you're confused about causality.
Or you think it's a niche meaning, such that someone who says "I didn't do my best" is more likely to be confused about determinism than to be using that meaning?
Roughly, yes. "[N]iche" doesn't quite make sense here; I might say it's more like the naive meaning. If you really get that things couldn't have been otherwise, you'd say something different from "I didn't do my best", like "next time I want to do better" or "I have a lot to learn", which expresses roughly the same meaning as counterfactual "regret" while also making it clear that in that past instance it wasn't possible to have done anything differently.
That someone would say that they didn't do their best is considerable evidence that they are confused about determinism, because if they actually understood it they would know that they couldn't have done anything else; remarking that they didn't do their best suggests they still believe the world could have been otherwise.
I'm not sure that they're likely to get more confused about agency since it seems like nearly everyone is confused about just how to define agency in a way that both conforms with intuitions and doesn't rely on things like metaphysical claims to free will.
From a broad policy perspective, it can be tricky to know what to communicate. I think it helps if we think a bit more about the effects of our communication and a bit less about correctly conveying our level of credence in particular claims. Let me explain.
If we communicate the simple idea that AGI is near then it pushes people to work on safety projects that would be good to work on even if AGI is not near while paying some costs in terms of reputation, mental health, and personal wealth.
If we communicate the simple idea that AGI is not near then people will feel less need to work on safety soon. This would let them not miss out on opportunities that would be good to take ahead of when they actually need to focus on AI safety.
We can only really communicate one thing at a time to people. Also, we should worry more about the tail risks of false positives (thinking we can build AGI safely when we cannot) than false negatives (thinking we can't build AGI safely when we can). Taking these two facts into consideration, I think the policy implication is clear: unless there is extremely strong evidence that AGI is not near, we must act and communicate as if AGI is near.
The second. In fact, it's bigger than that. Everything that optimizes is always doing its best, and also its worst, and its median, etc.
This phrase is meant to point out the contradictory nature of the idea of "best" that we strive for. Subjectively, we reason about causality and see counterfactuals that were worse and so judge what's best against those counterfactuals. But there's also an absolute sense of "best" that is meaningless because we're not actually in control but rather observers within the universe seeing how things play out and identifying with some chunk of the universe that's heavily intertwined with the process that produces our observations (even this is a bit wrong because it's making some implicit claims about what this observer is that I wouldn't endorse, but I think this is a useful enough shorthand for this explanation).
Thinking about the subjective, counterfactual best is sometimes useful as a side effect of how we figure things out, but in the context of speaking about how the feeling of regret is optional, it makes sense to emphasize the absolute sense in which best/worst/etc. has no real meaning.
I share your frustration with voting at times. I know and endorse the stated voting norms. I'm also reasonably confident that plenty of people engage in yay/boo voting at times, especially around particularly touchy subjects. In the same way I'd expect to get heavily downvoted on the fictional Blessed Wrong (Less Wrong for Christians) if I posted about how there's no God unless I was extremely careful about how I did it, I expect to get downvoted on Less Wrong if I say something that goes against commonly held beliefs of readers unless I'm really careful about how I do it.
As a result, I have a hard time knowing what feedback to take from upvotes and downvotes. Sometimes I get upvoted because I pressed an applause button. Sometimes I get upvoted because I wrote an actually good post. Sometimes I get downvoted because I pressed a boo button. And sometimes I get downvoted because my post wasn't very good for some specific reasons I should fix. There's generally not enough bandwidth in the voting signal to figure out what case I'm in unless someone takes the time to write a comment explaining why they voted how they did.
It's perhaps worth noting that, whatever internal function people use to decide how to vote, the effect of their votes is to make up-voted things more visible and make down-voted things less visible on the margin, and this sends a feedback signal to incentivize writing things that get up-voted to the extent you want your writing to be seen by more people.
Under this regime, rate limiting seems fair to me, as it's an extension of controlling how much marginal content you produce can be seen. I'm not sure if it's perfectly well tuned, but seems likely that it's having a useful effect of causing there to be fewer things on Less Wrong that Less Wrong readers don't want to read, and even as salty as I get about it when I think my own insights are under-appreciated or misunderstood, I still accept that's just part of how Less Wrong has to work given that, at least for now, it's got to determine quality based on the aggregation of low-bandwidth signals.
My motivation with these comments is to push back against claims that regret is necessary or mandatory. I agree that regret is pretty useful for most people, and I wouldn't recommend they give it up unless it just falls away (and insert obvious caveat about psycho/sociopaths). But I also think it's worth knowing that feeling regret is not necessary to live a fulfilling life, because not knowing that it's not necessary creates a roadblock where people cling to their regret long past their need for it.
It might help to say a bit more about what my experience of regret is. It used to be a feeling, like a burning sensation of fear and anxiety, like the fear that I'll be abandoned by friends and family for screwing up. Now, after doing a bunch of meditation and other work to come to terms with the world as it is, I see regret like a negative number on a spreadsheet: useful information to update on and act on. This is a different enough experience that I think it's better not to call it regret, since that seems likely to lead to more confusion: most people have strong associations with what the feeling of regret is like rather than what the accounting of regret is like.
It might also help to know that I see regret as separate from remorse. Whereas regret, when felt, might be taboo'd as "clinging to counterfactuals" in its various forms, remorse is more like sadness that the world is as it is and that my best did not produce a better outcome. It'd take a lot of equanimity not to feel remorse, but it takes surprisingly little equanimity to not feel regret.
[...] regret is mandatory [...]
Knowing that you could have made a better choice is an act of feeling regret for the choice you did make.
I dispute the claim that regret is mandatory in most senses of the word.
I'm specifically saying that I could not have made a better choice because I already made the best possible choice given the circumstances, so there is nothing to regret other than the sort of "regret" that I did not counterfactually maximize expected value.
Behind this claim about regret is another: the universe is subjectively deterministic (the universe looks deterministic from the view point of an observer, and any appearance otherwise is due to uncertainty rather than free will or randomness). This claim allows us to avoid making any metaphysical (and thus unprovable) claims like that the universe is really deterministic, that free will exists, or that counterfactuals are real (as opposed to constructs to support the reckoning of causation).
I don't feel regret. I used to, but with a lot of work I came to trust that I'm always doing my best, so I can never have regret because I couldn't have done anything else. I can't even regret that my best was not enough because if I could have made a better choice when I had to make the choice I would have. No point getting worked up over things beyond my control. Just learn from it and do better next time.
That said, you can't just force your way to this position, and doing so would probably be bad. Regret, as you note, is useful for a lot of people. It's load bearing for them, part of the complex tangle of confusion and dysfunction that's been carefully balanced to keep their lives going. It's not a great state to be in, but it works, and it's scary to imagine pulling out regret because doing so without fixing a lot of other things first would make the whole system collapse by removing a key source of motivation.
When I think back on my time in high school, here are the things I wish I had done:
- taken the GED to graduate early (probably at 15 or 16)
  - slogging through high school was not clearly worth it for me; I likely could have gotten an earlier jump on my life if I had skipped 2-3 years of it
- gotten a job so I could buy a car
  - not having a car really restricted what I could do with my life, and getting one earlier would have given me greater autonomy and thus the ability to control more of how I spent my time
- socialized more
  - I was very focused on my studies and getting into college; I should have spent more time hanging out with friends and going on dates
- gotten a job
  - perhaps unique to my time and situation, I could have easily gotten a job in programming or IT
- skipped college?
  - this one is tough; I would likely be financially better off if I hadn't gone to college, but maybe I wouldn't have had access to the same job opportunities and wouldn't have learned all the things that have enabled me to really engage with this community
One of the tough things about advice is that it's extremely context dependent, and for every bit of good advice you hear, someone needs to hear the opposite advice. So you'll have to think about whether this applies to you, and you'll probably get it wrong. But that's okay; have no regrets, because you could only ever have done the best you could at the time.
I've lived in Berkeley, Oakland, and now San Francisco. Can confirm this is roughly true.
The best way to get integrated into the community is to live in a group house. If you're moving to the Bay you might have to do this anyway for financial reasons. Most of those are in Berkeley or northern Oakland.
When I lived in downtown Oakland I was already pretty far away from people (could no longer walk to my friends' houses), and I'd less often bump into folks in the community on the street.
Now I live in San Francisco. Thankfully I have access to a car, because that makes it reasonable to come to things in the East Bay. However because I don't live in a group house anymore and don't live in Berkeley, I'm definitely less plugged in to what's happening. Lots of new folks have come to Berkeley and I don't really know them. Partly this is because I'm getting old and I'm already happy with my friends and don't need to find a bunch more so am less incentivized to spend time socializing to get more friends, but also because I'm just not organically meeting new rationalists as often as I used to.
Nice! Your concluding paragraphs bring to mind the various sorts of map-territory confusions we get ourselves trapped in and how this causes other downstream confusions. I think, especially as a student, it's really easy to fall into a trap of thinking physics is the map because that's what you learn in class. Even though you know in theory that the map describes the territory of reality, you spend so much time just trying to make sense of the map that it's easy to get lost in it and forget it was ever supposed to point at anything other than itself.
I think we see a similar phenomenon in other fields, so this is not unique to physics, but physics, and your post in particular, make clear how prevalent map-territory confusion is and how easily we slip into it.
The idea of a religion is that it mainly depends on faith and belief. However, when you KNOW that your religion is true, you throw the belief part out of the window and replace it with certainty. Therefore, you don’t need millions of people to share the same beliefs as you, because it is no longer a belief; it’s knowledge that you possess.
Many, maybe even most, religions care little what you believe and care much about what you do. Abrahamic religions are somewhat unique in caring a lot about beliefs, and even that is not universal there, since many forms of Judaism are happy to include atheist Jews so long as they observe practices.
The meaning of faith itself is having a strong belief in something that cannot be proven by empirical evidence.
This is a meaning of faith but not the only one. The type of faith you're talking about is common within Christianity and Islam, but other notions of faith & trust within religions exist.
Unclear if the rest of your arguments follow since the premises need more caveats, thus limiting the universality of what you seem to be trying to claim later in the post.
I think that there's a process we can meaningfully point to and call qualia, and it includes all the things we think of as qualia, but qualia is not itself a thing per se but rather the reification of observations of mental processes that allows us to make sense of them.
I have theories of what these processes are and how they work, and they mostly line up with what's pointed at by this book. In particular I think cybernetic models are sufficient to explain most of the interesting things going on with consciousness, and we can mostly think of qualia as the result of neurons in the brain hooked up in loops so that their inputs include information not only from other neurons but also from themselves, and these self-sensing loops provide the input stream of data that other neurons interpret as self-experience/qualia/consciousness.
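As a toy illustration of what I mean by a self-sensing loop (this is my own sketch with arbitrary weights, not anything from Seth's book): a unit whose input at each step includes its own previous output carries a trace of its own history, not just the current external signal.

```python
# Toy sketch of a "self-sensing" unit: its input at each step includes
# its own previous output. The weights w_self and w_ext are arbitrary
# assumptions for the demo, not empirical values.

def step(prev_output, external_input, w_self=0.5, w_ext=0.5):
    """One update of a unit that feeds its own output back in as input."""
    return w_self * prev_output + w_ext * external_input

# Drive the unit with a brief external signal, then silence. Its state
# remains nonzero after the input stops, because it "senses" itself.
output = 0.0
for signal in [1.0, 1.0, 0.0]:
    output = step(output, signal)

print(output)  # still nonzero even though the external input is now 0
```

The point of the sketch is just that downstream units reading `output` receive information about the unit's own past activity, which is the kind of self-referential data stream I'm gesturing at.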
Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?
Yes. Dualism is deeply appealing because most humans, or at least most of humans who care about the Hard Problem, seem to experience themselves in dualistic ways (i.e. experience something like the self residing inside the body). So even if it becomes obvious that there's no "consciousness sauce" per se, the argument is that the Problem seems to exist only because there are dualistic assumptions implicit in the worldview that thinks the Problem exists.
I'd go on to say that if we address the Meta Hard Problem like this in such a way that it shows the Hard Problem to be the result of confusion, then there's nothing to say about the Hard Problem, just like there's nothing interesting to say about why ships never sail off the edge of the Earth.
It sounds like Seth's position is that the hard problem of consciousness is the result of confusion, so he's not ignoring it, but saying that it only appears to exist because it's asked within the context of a confused frame.
Seth seems to be suggesting that the hard problem of consciousness is a bit like asking why people don't fall off the edge of the Earth. We think of this question as confused because we believe the Earth is round. But if you start from the assumption that the Earth is flat, then it's a reasonable question, and no amount of explanation will convince you otherwise.
The reason these two situations look different is that it's now easy for us to verify that the Earth is not flat, but it's hard for us to verify what's going on with consciousness. Seth's book is making a bid, by presenting the work of many others, to say that what we think of as consciousness is explainable in ways that make the Hard Problem a nonsensical question.
That seems quite a bit different from "simply ignoring the Hard Problem", though I admit Jacob does not go into great detail about Seth's full arguments for this. But I'd posit that if you want to disagree with something, you need to disagree with the object-level claims Seth makes first, and only after reaching a point where you have no more disagreements is it worth considering whether or not the Hard Problem still makes sense; if you think it does, it should be possible to make a specific argument about where the Hard Problem arises and what it looks like in terms of the presented model.
Note to self: work in a reference to a book on cybernetic models of mind, e.g. something like Surfing Uncertainty.
Or maybe this book: https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness
My understanding is that since the Civil War the interpretation of the 10th Amendment is that states retain powers not because they are independent states participating with the federal government, but because they are explicitly subject to federal authority and maintain their powers only insofar as the Constitution protects them and the federal government doesn't move to take them by rule of law. Prior to the end of the Civil War it seems to have generally been held that states were independently associated with the United States and could leave, and we fought a war to assert that they could not. This makes it clear that any powers they have are because the federal government lets them have them rather than the other way around.
Why devolution? Because the thing I'm describing is just what the word means: "the transfer or delegation of power to a lower level, especially by central government to local or regional administration."
It's always been the case that the US is more devolved than typical countries, but the Constitution is quite clear that power is devolved to the states from the federal government, not that the federal government is granted power at the behest of the states (as was the structure of, say, the Articles of Confederation).
I agree with you that devolution is generally a good thing. Policies don't work well when they try to serve too many people with too-divergent needs and desires. Most of our problems today come, in my estimation, from overreach by the federal government and fights over what that overreach should do. This does mean accepting some undesirable outcomes, like allowing states to enact policies I disagree with, but I see this as the price of peace. Thankfully the US Constitution is designed to enable such a system, and I expect we'll naturally fall back on it if a strong national government becomes more than people will tolerate.
I see the Patriot Act as a grab to expand the reach of existing powers rather than a move into new powers. The powers it granted mostly fit within the existing constitutional framework of what the federal government is allowed to do rather than asserting new powers. I see this as meaningfully different, as the federal government could have enacted something like the Patriot Act at almost any time after WWII but never chose to, likely because it was technologically infeasible to carry out.
It also wasn't a power grab against the states so much as a power grab against the privacy of individual citizens, and the Supreme Court has made it clear on multiple occasions that there is no general right to privacy, only privacy as a consideration.
I've noticed this and been worried, but there's one thing about the US which makes me slightly less worried, which is that the US has a natural release valve for a lot of these issues if people are finally forced to pull it, and that's devolution.
The US used to be more devolved, and that time is still technically within living memory. The federal government grew tremendously in power between 1930 and 1990, with most of the growth happening between 1940 and 1970. This reflected the needs of the country given the times it found itself in: the Great Depression, then WWII, and then the Cold War.
But since roughly 1990 the federal government hasn't made big power grabs. Mostly this is because it's already made all the useful grabs to make and has mostly been expanding its power within the domains it already controls, but also because there's a contingent of folks in the US who continue to support the right of the states to govern themselves without federal interference. Although "states' rights" is right-coded, it's in practice employed on the left as well. On the right it's used for things like limiting taxes, restricting abortion, and protecting gun ownership. On the left, for raising taxes to support more generous social services, permitting federally-prohibited drugs by hampering enforcement, and restricting access to guns.
If faced with a crisis at the federal level, the natural move would be for the US to devolve powers back to the states. Although states are not as robustly independent as they were 100 years ago, because they are so heavily involved in day-to-day administration they would be able to carry on and rebuild local capacity to make up for lost federal assistance within several years.
Devolution seems likely to me because any would-be dictator or just sufficiently hated president would, given the current political climate of the US, find that ~half the population doesn't like them and a sufficient majority of ~1/3 of the states would dislike them enough to rebel in various ways. I don't mean a civil war, but more like kicking out federal agents and taking local control of services and institutions. This is likely to work because of an extremely strong norm within the military against moving against its own citizens or getting involved in politics, and the January 6th riot was an excellent demonstration of their continued restraint here.
Thus I expect that if political conflict at the federal level comes to a head, some of the states will just nope out. Not of the US entirely, but of permitting the federal government to exercise all powers that it's claimed in the last 90 years. And after a lot of noise, they'll basically be allowed to do it, same way this is being permitted today on a small scale with things like drug decriminalization, gerrymandering, and sanctuary cities.
I don't expect devolution to be painless, but I do expect it to be functional and for the US to come through surprisingly intact. Internally it may look like things are falling apart, but on most measures we will barely notice that anything is happening.
Also a note to myself: I think I should swap out some of the discussion here for talking about Lakoff and metaphors.
I don't exactly disagree, but your way of talking about epistemology presupposes a certain understanding or framing of what truth and knowledge are and how they work, whereas the field generally is just about whatever it is that humans call by the names truth and knowledge, and it is the project of the field to figure out what they really mean.
The Tasmanians were not the only culture to develop these "maladaptations", but they were unique in that they were unable to re-learn their lost skills.
This line leaves me wondering about human isolation on our little planet and what maladaptations humanity is stuck with because we lack neighbors to learn from.
Honestly some version of it scales to any size: planet, country, city, neighborhood, etc.
Downvoted because I don't think the evidence presented here is strong enough to justify the claims made by, e.g. the title. After reading this I'm left with a lot of questions like:
- are we currently suffering a disaster caused by long influenza?
- are some countries locally suffering a disaster of long SARS?
- long AIDS?
- our systems seem to route around a large chunk of people not working, are we sure this would be a disaster?
- what makes us so sure long COVID will be uniquely bad?
At least personally, I want to see posts on Less Wrong that, when they make bold, confident assertions like this post does, put in substantial effort to back them up OR make it clear that they are bold, speculative assertions. I'm judging this post by its tone, as one in the category of bold, confident assertions, and the lack of evidence presented to back them makes me dislike it.
Although it's not a strict enforcement mechanism, what about the ability to mark posts to be included in a robots.txt file to disallow them from being scraped (assuming the bots respect it)?
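Concretely, an opt-out flag on a post could translate into per-post entries like these (the paths here are hypothetical, just to show the shape):

```
User-agent: *
Disallow: /posts/some-opted-out-post
Disallow: /posts/another-opted-out-post
```

Well-behaved crawlers check this file before fetching pages, though as noted, nothing forces them to honor it.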
This seems an important point. I have a measured IQ of around 145 (or at least that's what it was when last measured, maybe 15 years ago in my 20s). My reaction times are also unusually slow. Some IQ tests are timed; my score would come in a full 15 points lower (one standard deviation) on those.
You might complain this is just an artifact of the testing protocol, but I think there's something real there. In everyday life I'm a lot smarter (e.g. come up with better ideas) when I can sit and think for a while. When I have to "think on my feet" I'm considerably dumber. The people I meet who feel significantly smarter than me usually feel that way because they can think quickly.
I've even gotten complaints before from friends and coworkers wondering why I don't seem as interesting or smart in person, and I think this is why. I'm not quite sure how to quantify it, but on reaction time tests I'm 100+ms slower than average. Maybe this adds up to being able to think 1-2 fewer thoughts per second than average. Obviously this is a difference that adds up pretty quickly, especially when you're trying to do something complex that requires a lot of active thinking rather than crystallized intelligence.
I'm writing a book, and I'm very cognizant of these issues as I write. It's academic nonfiction, so basically the worst of the worst. I don't normally read these sorts of books myself and prefer to find summaries. Only rarely is a book in this genre worth reading in full, and mostly because it's a way to get you to think in depth about something.
So what am I doing in response as I write? I'm keeping the book dense and short. I'm aiming for <50k words and am more likely to hit 30k. That means it has higher density than typical academic nonfiction.
Why write a book, though? I am posting the draft chapters here as I write them. But books have a few nice properties:
- Perceived by others as prestigious so they are more willing to trust you might have something worthwhile to say.
- The format forces a different style of writing that has to be more self-contained and can't lean as easily on references to explain things.
- Even though I could explain the main ideas of the book in just a few paragraphs, some of the arguments take time to develop, especially for readers who disagree, so the book in part serves to develop those arguments and make the case in enough detail that more people will be convinced.
Are you familiar with the idea of masks and mask work? Most of what I know comes from Johnstone's Impro (here's a decent online review/summary), but the basic idea in theater comes from folk religion, where people wear masks and go into trances to temporarily become someone else. Traditionally humans used mask trances to embody what they believed were gods and spirits. Now people use mask trances to play characters on the stage.
What does this have to do with anything? Although the mask trance is an extreme form, there's a sense in which we can wear masks to become different people in different situations. Most people do this a little bit, e.g. behave a bit differently with different people, and some people do it a medium amount, e.g. code switching, but you can learn to do it a lot.
This gives you the option to put up stronger boundaries between the person you are at work and at home (and in other situations). Even just knowing about masks can help make it clearer how you can put down the mask you wear at work, say, when you get home, maybe by having a little ritual where you "take off your mask". For example, think of Mr. Rogers changing into his sweater when he "came home" and changing out of it when he "left home".
It's not exactly perfect, but it's a technique that may help you deal with the transition between different contexts where you need to present a different persona to the world.
A few thoughts on this.
This post reminded me of Eliezer's take against toolbox style thinking. In particular, it reminded me of the tension within the rationality community between folks who see rationality as the one thing you need for everything and folks who see it as an instrumentally useful thing to pull out in some circumstances.
The former folks form what we might call the Core Rationalists. Rationality might not be literally everything, but it's the main thing, and they take an expansive view on the practice of rationality: if something helps them win, they see it as part of rationality by definition. This is also where the not-so-straw Vulcan LARPers hang out.
The latter group we might call the Instrumental Rationalists. They care about rationality to the extent it's useful. This includes lots of normal folks who got interested in rationality because it seemed like a useful tool, but it's not really central to their identity the way it is for Core Rationalists. This is also the group where the Post/Meta-Rationalists hang out, whom we can think of as Core Rationalists who realized they should treat rationality as one of many tools and seek to combine it with other things to have a bigger toolbox to use to help them win.
Disagreements between these two groups show up all the time. They often play out in the comments sections of the forum when someone posts something that really gets at the heart of what rationality is. I'm thinking about posts from @[DEACTIVATED] Duncan Sabien, comments from @Said Achmiz, whatever @Zack_M_Davis's latest thing is, and of course some of my own posts and comments.
Perhaps this disagreement will persist because there's not really a resolution to it. The difference between these groups is not object level rationality, but how they relate to rationality. And both of these groups can be part of the rationality movement, even if they sometimes piss each other off, because they at least agree on one thing: rationality is really useful.
I think, based on this reply, you basically get my point, we're just quibbling about some details.
I take this sort of hard line stance on "objective" because surprisingly many people, when pressed, turn out to be naive realists, including a whole bunch of rationalists I've interacted with over the years. So if I seem maximally uncharitable, it's because there's a bunch of folks out there who are failing to grasp the point I make in this post under any terms.
Humans got trained via evolution alongside a bunch of dumber animals. Then we killed a lot of them.
Evolution doesn't align with anything other than differential reproduction rates, so you'd somehow have to make the only way to reproduce to be aligned with human values, which basically sounds like solving alignment and then throwing evolution on top for funsies.
Just made it up by extrapolating from the dath ilan examples I've seen.
1. There's (at least potentially) a distinction between what something means and what actually makes people say it. I think you are saying that what makes people call things objective is the presence of good intersubjective agreement, and that actually e.g. physics is not more "objective" than art but merely seems so because it has good intersubjective agreement. Is that right?
Yes, for reasons that might seem obvious after I answer the next question.
2. If so: what exactly do you mean by "objective"? Like some other commenters here (tailcalled, TekhneMakre) I am concerned that you're defining "objective" in a way that makes it (fairly uncontroversially) not apply to anything, and it seems to me that there are plausible ways to understand "objective" that make it apply more to things commonly thought of as objective and less to things commonly thought of as subjective, in which case I think that might be a better way to use the word. But I'm not sure, because I don't know quite what you mean by "objective". (It seems like you mean something with the property that "theories of physics are in our heads" implies "physics is not objective", for instance. But that doesn't really nail it down.)
I generally think we should taboo objective because I don't think there's agreement on the definition. I have two definitions in mind, and I think there's a motte and bailey situation going on with them.
Definition 1: not dependent on a mind/observer for existence
Definition 2: stuff that seems to be the same for all known observers
Definition 1 is something like the strong version of "objective". Definition 2 is a weak version that's equivalent to a definition for "intersubjective consensus".
Definition 2 is the thing that's defensible, but Definition 1 is what some people want to mean by "objective", yet nothing exists independent of minds because existence is a property of ontology (the map) not reality (the territory). I say more about this fine distinction between existence and being here.
3. When I say "physics is objective" (actually I would generally not use those words, but they'll do for now) what I think I mean is something to do with physics being grounded in the external world, and something to do with my opinion that if aliens with very different mental architecture turned up they would none the less have quite similar physics, at least to the extent that it would make similar predictions and quite likely in its actual conceptual structure, and really not very much to do with intersubjective agreement. Do you think I am just deluding myself about what's going on in my head when I say that physics is more objective than art, and that actually all I'm doing is comparing levels of intersubjective agreement? Or what?
- (I do think that intersubjective agreement is relevant. The way it's relevant is that what-I'm-calling-objectivity is one possible explanation for intersubjective agreement, so strong intersubjective agreement is evidence of what-I'm-calling-objectivity. But it's not the only possible explanation, and it's far from being proof of objectivity, and it certainly isn't what "objectivity" means.)
I think somehow you've come to believe there is evidence to suggest there's an external reality and you're drawing conclusions about other things based on having assumed there's an external reality independent of you as an observer.
For comparison, I would use reality/"the world" to point directly to experience. Anything else we think we know is known only through that experience, and that includes any claims we might make to the existence of external reality. But in an important sense external reality, however real it seems, is not real, because we only know about it indirectly as mediated by our experience; thus its existence is a claim, not an assumption.
In dath ilan they don't teach history. Instead children are kept willfully ignorant about the past, and adults coordinate to make sure children don't learn about historical events too early. Then, each year in school, they play in prediction markets to identify which historical events actually happened. "History" is not studied so much as used as training data for getting better at making accurate predictions under uncertainty.
No. I've become very physically aware of how caffeine borrows from my body's resources. There's always a need to recover later. After all, there was a reason my energy wasn't that high in the first place! So that reason comes home to roost.
This seems intuitive, but I'm a bit suspicious based on the use of stimulants to treat a broad range of conditions like ADHD, which we might generically think of as conditions where one is persistently understimulated relative to one's body's desired homeostatic level of stimulation. For these people, caffeine and other stimulants seem to simply treat a persistent chemical misalignment.
Further thoughts:
- but maybe persistent understimulation isn't just a chemical condition but something that can be influenced heavily by, say, meditation and may for many people be the result of "trauma"
- perhaps not all stimulants are created equal and caffeine in particular behaves differently
- the phenomenon you're describing might still make sense in light of this if it's just describing what stimulants are like for folks who aren't significantly below their homeostatic stimulation target
However, I'm speculating from the outside a bit here, since I've never had to figure this out. My desired level of stimulation is relatively low, and I can get overstimulated just from too much sound, light, or touch, so stimulants are really unpleasant for me. I generally stay away from them because the first-order effects are bad, which means I've never personally had to explore the second-order effects; I'm merely inferring from what I observe in others.
I think the key here is that whatever reality really is, its laws and parameters aren't controlled by anyone, and thus it's useful to reliably say that this reality is objective.
This is a reasonable supposition to make, but as I point out in another comment, we only know that its laws and parameters aren't controlled by anyone insofar as we've only thus far seen evidence to suggest that. And the fact that tomorrow we could obtain evidence suggesting otherwise means there's a subjective layer between us and any claim like "reality is objective". Thus any claim to objectivity is necessarily a subjective claim, and because of what objectivity means, this disqualifies a strong claim to objectivity and permits only a subjective, contingent one.
I think that's generally correct, although a bit beyond the intended scope of this post. There's no view from nowhere, no actual observer-independent information we can obtain, so any perceived objectiveness is contingent upon the subjective evidence we have about such things and we cannot be certain they are objective. Due to this lack of certainty I think it is better to just taboo the idea of objectivity and think in terms of "things that no observer has yet found sufficiently strong evidence to disagree with".