(this comment is copied to the other essay as well)
I respect the attempt, here, and I think a version of the thesis is true. Letting go of control and trying to appreciate the present moment is probably the best course of action given that one is confronted with impending doom. I also recognize that reaching this state is not just a switch one can immediately flip in one's mind; it can only be reached by way of practice.
With these things in mind, I am still not okay. More than anything I find myself craving ignorance. I envy my wife; she's not in ratspaces whatsoever and as far as I know has no idea people hold these beliefs. I think that would be a better way to live; perhaps an unpopular opinion on the website where people try not to live in ignorance. It's hard not to be resentful sometimes. I resent the AI researchers, the site culture, and I especially resent certain MIRI founders and their declarations of defeat.
I think that means I need to disconnect, once and for all. I've been toying with the idea that I need to disconnect from the LW sphere completely and frankly I think it's overdue. Dear reader: if you aren't going to go solve alignment, I suggest you consider following suit. I might hang around a bit to view replies to this comment but... Yeah. Thanks for all the food for thought over the years LW, I'm not sure if it was worth it.
It's the "Boy who cried wolf" fable in the format of an incident report such as what might be written in the wake of an industrial disaster. Whether the fictional report writer has learned the right lessons I suppose is an exercise left for the reader.
My advice would be this:
Trying to meet people for the sole purpose of dating them is a spiritually toxic endeavor, with online dating being particularly bad. I had a handful of girlfriends before meeting my wife, none of whom came to me through online dating or trying to get dates with people I didn't know.
I contend that the best path to a relationship is through community, broadly defined. What you want is to be around people with whom you can cultivate compatibility. The online dating/cold approach model relies on being able to quickly discern compatibility, which I think most people are kind of bad at.
My definition of community in this context is any circumstance that lets you repeatedly interact with people in a non-targeted way. For meeting my wife, that was a weekly bar trivia night. For past partners it was a mix of extracurricular activity groups and friends-of-friends groups. These environments accomplish a handful of things at once: they establish shared background and positive memories, they let you display and observe positive traits that are difficult to signal on a dating profile or on a date (e.g. patience or thoughtfulness), and ideally they're intrinsically worth existing in for their own sake. That last one is important because, to use my first example, even if I hadn't met my wife playing that bar trivia game, I still would have had fun going and I made other friends along the way.
You can't just show up and expect things to fall in your lap, of course. You do want to be improving yourself and putting your best foot forward. Not all communities are created equal, so once in a while you need to step back and evaluate whether you need new opportunities in your life. And obviously, you still have to be ready to actually ask someone out eventually.
It's not easy, necessarily, but it did work for me.
You're not wrong. Learning to crimp really does enable climbers to perform feats that others cannot, and plenty of them suffer injuries like the one I've linked to and decide to heal and keep going. My addendum isn't "never do something hard or risky," it's "pain is a warning; consider what price you are willing to pay before you go pushing through it."
Addendum: Crimp grips are a major cause of climbing injuries. It's sheer biomechanics: the crimp grip puts massive stress on connective tissues that aren't strong enough to handle it reliably.
The moral of the addendum: choose your impossible challenges wisely; even if you can overcome them the stress and pain might have been a warning from the beginning. If nothing else it should be a warning to get some good advice about prevention or you may find yourself unable to pursue your goal for weeks at a time.
It's going to be tricky. You may already be too close to the situation to judge impartially, and a case study is going to be difficult to use as evidence against population-level surveys of well-being, especially for your implied time horizon. You could attempt to benchmark against previous work, e.g. see what the literature has to say about the effects of poverty on diet, educational attainment, etc. in first-world cities, but your one new data point still won't generalize and it wouldn't be doing the heavy lifting in your argument for localism at that point.
Unless I'm very much mistaken, emergency mobilization systems refers to autonomic responses like a pounding heartbeat, heightened subjective senses, and other types of physical arousal; i.e. the things your body does when you believe someone or something is coming to kill you with spear or claw. Literal fight or flight stuff.
In both examples you give there is true danger, but your felt bodily sense doesn't meaningfully correspond to it; you can't escape or find the bomb by being ready for an immediate physical threat. This is the error being referred to. In both cases the preferred state of mind is resolute problem-solving, and an inability to register a felt sense of panic will likely reduce your ability to get to such a state.
I think I see your point but I'm not sure how to answer the question as you posed it so let me make an analogy:
Imagine I come to you and say "I have a revolutionary new car design that will upend the market and break the chokehold of big auto! Best of all, it's completely safe; the locks are unpickable and the windows are unbreakable, so no one will ever be able to mug you in your car!"
You would be wise to ask "Okay, but what about in a crash? Is it safe there?" and the truth would be no, not really. Actually people who get caught in crashes with this car are in more danger. And car crashes are so much more common than people being mugged in their car that if you do the math my new car substantially increases your risk exposure.
So when I'm talking about the status quo, I'm talking about the risk landscape that the technology needs to be robust against. Normal currency is subject to a lot of social engineering attacks and very few technical ones, so crypto is solving the wrong problem because as thousands of stolen Bored Apes can attest, you can still be easily tricked into giving the wrong people your wallet details. Although the normal currency world is far from perfect it does at least have recourse sometimes, like transaction reversals or legal redress.
I'd say that's a good point but perhaps doesn't exhaustively cover all the problems. The way I've come to think about crypto, which I think is roughly congruent, is that the things crypto is good at (decentralization, security against hacking) are not major vectors of attack by bad actors under the status quo, and the things it isn't robust against (social engineering, obfuscation of value) definitely are.
This can be true but it varies a decent amount with expectations I think. As my friends get older and more of us have kids to think about it's becoming more normalized to have a mix of sobriety levels at what would have once been drunk parties.
Any industry with public exposure is going to run into problems. Take retail; having the store open at all possible profitable hours is much more important than having a full complement of staff at any given moment. My job is only adjacent to retail but even so, having a whole team go on vacation would put the supply chain on pause. That move might technically be possible with advance planning but it would have major impacts on throughput.
I think any sector that relies on moving physical matter (including people) through space is a bad candidate because you're often dealing with a cap on the amount of effective capacity your capital has, so having a team of people go dark will lead to underutilization. Consider maintenance jobs, shipping, manufacturing, food, emergency services, transport, and I'm sure a dozen others.
The thing you've outlined sounds to me like news media, sort of, as well as implicitly leaning on existing news media. The amount of information entailed is comparable; having up-to-date info on over 3000 United States counties is a far from trivial endeavor.[1]
It's different of course in that existing news media isn't remotely incentivized to support this kind of work, instead being caught in the tar pit of getting eyeballs and ad dollars, as well as being an arena which monied interests know they need to optimize for. Of course if the tool you're describing became well-known, it would also become subject to competitive pressures from without.
And in practice, the number of people who would get value from it is probably not all that much different from the number of people who are already immersed in activism. You get marginal gains from more efficient allocation of some of the ones who are just kind of being pulled along by their social networks.
Could a GPT-X in principle maybe help scrape through every local paper, every town council pdf, and output useful insight that current activist communities don't already have access to if they're sufficiently motivated? I think eventually yes, but by the time AI is that powerful there might be more important things to worry about.
[1] If you want to offer info at the town level, it gets even worse. There are nearly 20,000 incorporated towns, cities, and villages, although three-quarters are under 5,000 in population.
Yes, that's my point. I'm not aware of a path to meaningful contribution to the field that doesn't involve either doing research or doing support work for a research group. Neither is accessible to me without risking the aforementioned effects.
I feel like you mean this in kindness, but to me it reads as "You could risk your family's livelihood relocating and/or trying to get recruited to work remotely so that you can be anxious all the time! It might help on the margins ¯\_(ツ)_/¯ "
AI discourse triggers severe anxiety in me, and as a non-technical person in a rural area I don't feel I have anything to offer the field. I personally went so far as to fully hide the AI tag from my front page and frankly I've been on the threshold of blocking the site altogether for the amount of content that still gets through by passing reference and untagged posts. I like most non-AI content on the site, been checking regularly since the big LW2.0 launch, and I would consider it a loss of good reading material to stop browsing, but since DWD I'm taking my fate in my hands every time I browse here.
I don't know how many readers out there are like me, but I think it at least warrants consideration that the AI doomtide acts as a barrier to entry for readers who would benefit from rationality content but can't stomach the volume and tone of alignment discourse.
Aside from double-counting, here's a problem; you should have just set your starting priors on the false and true statements as x and 1-x respectively, where x is the chance your whole ontology is screwed up, and you'd be equally well calibrated and much more precise. You've correctly identified that the perfect calibration on 90% is meaningless, but that's because you explicitly introduced a gap between what you believe to be true and what you're representing as your beliefs. Maybe that's your point; that people are trying to earn a rationalist merit badge by obfuscating their true beliefs, but I think at least many people treat the exercise as a serious inquiry into how well-founded beliefs feel from the inside.
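To make the precision point concrete, here's a minimal sketch with hypothetical numbers (the statement counts, the ontology-failure probability, and the Brier scoring rule are all my illustrative choices, not anything from the original exercise). Both strategies look roughly calibrated, but the honest one scores far better:

```python
# Sketch (hypothetical numbers): two ways of reporting beliefs about 10
# statements, 9 of which are actually true and 1 false.
# Strategy A ("merit badge"): state 90% on everything, deliberately
# including a statement you believe is false, to appear calibrated.
# Strategy B: state 1-x on statements you believe true and x on the one
# you believe false, where x is the chance your whole ontology is wrong.

def brier(preds, outcomes):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

outcomes = [1] * 9 + [0]            # 9 statements true, 1 false
strategy_a = [0.9] * 10             # flat 90% on everything
x = 0.02                            # assumed chance of ontology failure
strategy_b = [1 - x] * 9 + [x]      # honest reports

print(brier(strategy_a, outcomes))  # strategy A's score
print(brier(strategy_b, outcomes))  # strategy B's score is much lower
```

Strategy A buys its "perfect calibration" at the 90% bucket by throwing away accuracy; strategy B is just as calibrated in expectation and far more precise, which is the comment's point.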
Is there a strong theoretical basis for guessing what capabilities superhuman intelligence may have, be it sooner or later? I'm aware of the speed & quality superintelligence frameworks, but I have issues with them.
Speed alone seems relatively weak as an axis of superiority; I can only speculate about what I might be able to accomplish if, for example, my cognition were sped up 1000x, but I find it hard to believe it would extend to achieving strategic dominance over all humanity, especially if there are still limits on my ability to act and perceive information that happen on normal-human timescales. One could shorthand this to "how much more optimal could your decisions be if you were able to take maximal time to research and reflect on them in advance," to which my answer is "only about as good as my decisions turned out to be when I wasn't under time pressure and did do the research." I'd be the greatest Starcraft player to ever exist, but I don't think that generalizes outside the domain of [tactics measured in frames rather than minutes or hours or days].
To me quality superiority is the far more load-bearing but much muddier part of the argument for the dangers of AGI. Writing about the lives and minds of human prodigies like Von Neumann or Terry Tao or whoever you care to name frequently verges on the mystical; I don't think even the very intelligent among us have a good gears-level model of how intelligence is working. To me this is a double-edged sword; if Ramanujan's brain might as well have been magic, that's evidence against our collective ability to guess what a quality superintelligence could accomplish. We don't know what intelligence can do at very high levels (bad for our ability to survive AGI), but we also don't know what it can't do, which could turn out to be just as important. What if there are rapidly diminishing returns on the accuracy of prediction as the system has to account for more and more entropy? If that were true, an incredibly intelligent agent might still only have a marginal edge in decision-making which could be overwhelmed by other factors. What if the Kolmogorov complexity of x-risk is just straight up too many bits, or requires precision of measurement beyond what the AI has access to?
I don't want to privilege the hypothesis that maybe the smartest thing we can build is still not that scary because the world is chaotic, but I feel I've seen many arguments that privilege the opposite; that the "sharp left turn" will hit and the rest is merely moving chess pieces through a solved endgame. So what is the best work on the topic?
This feels important but after the ideal gas analogy it's a bit beyond my vocabulary. Can you (or another commenter) distill a bit for a dummy?
Epistemic status: intuition and some experience, no sources.
Long-form profiles are mostly a waste of time. The two key weaknesses are (1) the primacy of photos and (2) adversarial communication.
1. Tinder made millions by realizing that many, many users make snap decisions based on photos, and the rest of the profile doesn't drive results nearly as strongly. That's not to say no one reads the profiles and decides on them, but at most a minority do (and some will just stay anchored to their impression of the photos while reading anyway).
2. Dating is a "lemon market," where everyone playing wants to attract the highest-quality partner they can relative to their own status. That means when you write a long-form profile, you're incentivized to fluff it up and present yourself in the best possible light, and when you're reading a profile you need to take that into account when trying to evaluate quality. And even if you're honest, other readers will probably still apply the critical lens. This doesn't degrade the value of the long-form profile signal to zero, but it does discount the return on your time investment.
Now that I'm happily engaged I tend to think my pessimism on the value of online dating full stop was well-warranted and there's no substitute for in-person socializing, particularly in a group/community space where the adversarial presentation dynamics are reduced. If my current mind was thrown back in time to when I was 18 I'd skip online dating altogether and just get into more clubs and study groups and stuff in college.
While that's an admirable position to take and I'll try to take it in hand, I do feel EY's stature in the community puts us in differing positions of responsibility concerning tone-setting.
"somebody not publishing the latter is, I'm worried, anticipating social pushback that isn't just from me."
Respectfully, no shit Sherlock, that's what happens when a community leader establishes a norm of condescending to inquirers.
I feel much the same way as Citizen in that I want to understand the state of alignment and participate in conversations as a layperson. I too, have spent time pondering your model of reality to the detriment of my mental health. I will never post these questions and criticisms to LW because even if you yourself don't show up to hit me with the classic:
Answer by Eliezer Yudkowsky, Apr 10, 2022:
As a minor token of how much you're missing:
then someone else will, having learned from your example. The site culture has become noticeably more hostile in my opinion ever since Death with Dignity, and I lay that at least in part at your feet.
Even an unknown member of parliament has still been tested against a competitive market, has at least met many or all the key power brokers, etc. They're much closer to the president end of the spectrum than the random citizen end.
To add to the consensus— if a random person actually had a favorable matchup against a career politician of any stripe, that would be a massive low-hanging fruit for existing political actors to capitalize on. The RNC and DNC would be falling all over themselves to present "average joe" candidates if doing so provided a consistent advantage. They're not, so it follows that either those organizations are both highly un-optimized for winning elections (probably not; too much money at stake) or else that the evidence doesn't bear out that they should.
I'm coming back to this thread having just seen the movie and really enjoyed it. femtogrammar's remarks about the emotional core of the movie partially resonate with me, in that there's a strong thread of making the active choice to live one's life as opposed to being swept along in it. I would elaborate that I think this is a movie about gaining perspective in the face of struggle. I also think the sci-fi action elements are quite effective in communicating this theme as well as being very technically well executed.
Basically, each act of the movie shows Evelyn in a different stage of awareness. When we meet her, she feels trapped and powerless in her life, unable to see past the framework she has constructed for herself of needing to please her father and manage her husband and daughter. The second act begins the sci-fi chicanery, showing her that she can be more than what she is if she can only break out of the mindset she is trapped in (do something you would never think to do, and you can become a different person, one who would think to do that!). With this she is able to begin exerting actual agency in her life; fighting back against people who are trying to control her.
Unfortunately the insight is incomplete, and as we move into act three, Joy confronts Evelyn with nihilism; despite having gained agency, Evelyn still lacks purpose. She is defeated and "dies" across the multiverse to variously literal degrees. However, her husband is able to help her find a breakthrough; in his own way he has created meaning for himself in kindness, and she learns to follow his example. The action changes at this point: instead of overpowering the grunts, Evelyn defeats them by helping them self-actualize so they no longer have reason to fight her. Finally, she is able to save Joy (the name is very apropos) by acknowledging that life will always contain struggle, but love makes it worthwhile to hold on.
So the thesis here isn't anything revolutionary, really. It's standard existentialist stuff. The reason I think it works so well for me and many other watchers is that the makers of this movie clearly intimately understand the emotional grammar of film. The constant cuts from one universe to another would make the movie totally unwatchable except that each one is carefully juxtaposed to be emotionally contiguous. The performances show a similar level of mastery, with facial expressions and body language being carefully translated from one shot to the next. Thanks to this care for continuity, even when the action scenes aren't obviously advancing the purely mechanical elements of the plot, they are still generally contributing to the emotional arc of the film by creating tension and giving Evelyn an obstacle to struggle against and gain self-knowledge in the process.
Visual elements are carefully incorporated as well. One example I liked is the karaoke machine receipt. It's established early and repeatedly appears in the frame. It has a heavy black circle on it which clearly echoes The Bagel, but it's not empty. At the center is the karaoke machine, which in my reading is a symbol of love. The very first shot of the film shows the family singing karaoke together and sharing a moment of happiness and love, reflected in the circular black mirror. We are introduced to the conflict when this vision snaps away and the mirror shows an empty table covered in receipts; a literal loss of perspective. This is just one motif but I think it shows the level of attention given to making each frame count; we get a symbolic representation of both the conflict and the resolution within seconds of starting the movie.
I've let this comment get much too long and I suspect you won't be swayed too much. Hopefully I've at least killed some of the mystery in my rambling. I liked the movie because it has a very clear emotional heart and uses a lot of technical prowess to deliver on that heart.
It seems to me like a big part of the picture here is legibility. Social and private boundaries are a highly illegible domain, and that state of affairs is in conflict with the desires of a society which is increasingly risk-averse. To stick with the language of this particular analogy, a successful benign violation for you is one that shows metis over the domain of "living with Duncan". On the flip side, the illegibility makes it harder for you to distinguish between malicious probing for weakness and innocent misjudgment, and for the other party to distinguish between "Duncan will be fine" and "Duncan will be a bit annoyed" and "Duncan will distrust or dislike me".
Unfortunately I think that means my takeaway is that this is a lesson that basically lives within each individual's personal sphere and doesn't generalize well. You can master the art of living with your own loved ones but probably not master living with everyone. You can say "the world will be better if we all get better at navigating this illegible territory," and you're right, but the how and when is left as an exercise for the reader.
I find this comment offensive.
First, your description of the process of consent is not universal; it doesn't describe any relationship I've been in going all the way back to when I was a teenager. At the very least this should tell you that this series of events wasn't acceptable because it's "just the way humans interact." Many men, including myself, actually talk to the women we want to have sex with, and "having lower amounts of sex" is far from an adequate reason to resort to the boundary-pushing and manipulation you describe.
Second, "the fact that you were raped doesn't make Alex a rapist" is a patently absurd position to hold, not to mention an incredible red flag for anyone who might consider being at all vulnerable around you in the future. The mental gymnastics required are mind-boggling. It appears the case you're making by saying "you were both naked on the beach, and I think a large fraction of men would at least try to escalate the situation" is something like "you were standing in traffic, you shouldn't be surprised you got hit by a car". This is textbook victim-blaming and completely ignores the perpetrator's agency in the matter. I would say it's akin to saying "you were eating a sandwich in public, and I think a large fraction of men are so hungry they would at least try to punch you and steal it." If "retreat mind-state" is the defense here then I guess those retreats should probably not be happening. If I took the series of actions described in the open letter on my own fiancee I would think she would be disturbed and traumatized by the experience, to say nothing of the more ambiguous context described above.
Third and lastly: regardless of what views you may hold in private, it's incredibly hostile behavior to make this case on a self-described assault victim's post about the incident. The whole comment serves to demean OP's perspective, and you then condescend that "she can feel however she wants" as if you haven't just described at length why you think she's foolish and misguided.
I don't like this defense for two reasons. First, I don't see why the same argument doesn't apply to the role Eliezer has already adopted as an early and insistent voice of concern; being deliberately vague on some types of predictions doesn't change the fact that his name is synonymous with AI doomsaying. Second, we're talking about a person whose whole brand is built around intellectual transparency and reflection; if Eliezer's predictive model of AI development contains relevant deficiencies, I wish to believe that Eliezer's predictive model of AI development contains relevant deficiencies. I recognize the incentives may well be aligned against him here, but it's frustrating that he seems to want to be taken seriously on the topic but isn't obviously equally open to being rebutted in good faith.
Three pillars: body, mind, environment.
Body - A varied diet including lots of plants and a mix of proteins— with 5 or 6 figures to spend a month you never need to eat another low-quality convenience meal. An appropriate exercise routine for the subject's level, incorporating at least some light strength training and adding modules as the habits become ingrained and sustainable (candidate exercises include yoga, jogging, crossfit, and kickboxing in no particular order). Sleep hygiene— 6-8 hours on a consistent schedule according to the subject's needs. Quit smoking entirely if applicable and reduce other recreational drug consumption with option to eliminate entirely if also needed.
Mind - Shop around for therapists; school matters less than rapport. Key skills to develop involve developing awareness of one's own emotions, plus the ability to nonjudgmentally moderate & process those emotions. Identify and excise any behavioral addictions; smartphones, social media, and some video games are high-level targets for excision. Meditation and mindfulness may be productive.
Environment - Ensure the subject has 1 to 3 close friends/loved ones to mutually confide in, plus several more casual friends. Connect with at least one local community (church is the archetypal answer but rec sports, bar trivia, meetup groups, clubs, volunteer orgs, or local gaming scenes can all be good choices) and make as many in-person appearances as the subject can comfortably handle. Ensure physical spaces are comfortable; helpful targets to look into will likely include tidiness, illumination, and ventilation. Minimize exposure to unwanted stimuli such as traffic noise.
Tie all of the above together with a healthy dose of slack and I think you'd be well on your way with most subjects barring clinical mental health issues.
I don't agree with this as a principle, although it may be a correct output. I think the notion of "a decent default" misses the mark compared to "think about your audience and the key elements of your message before deciding your form and tone."
To use a simple metaphor, if you need to anchor two pieces of wood together, a hammer and nails are usually going to be the quickest and cheapest way to do it. A drill and screws are often overkill. However I don't think that makes the hammer and nails the default; I think it makes them the correct tool in the majority of situations and the drill and screws the correct tool in a minority, but whichever one you end up using you should think about what type of stress your join will be under and use the right tool for the job.
I'm not the OP, but I bite that bullet all day long. My parents' last wishes are only relevant in two ways that I can see:
1. Their values are congruent with my own. If my parents' last wishes are morally repugnant to me, I certainly feel no obligation to help execute those wishes. Thankfully, in real life my parents' values and wishes are fairly congruent with my own, so their request is likely to be something I could evaluate as worthy on its own terms; no obligation needed.
2. I wish to uphold a norm of last wishes being fulfilled. This has to meet a minimum threshold of congruence on point 1 above, but if I expect to have important last wishes that I will be unable to fulfill in my lifetime, I may want to promote this norm of paying it forward. Except I'm not convinced doing so is actually very effective; surely it's better for me to work towards my own goals rather than work towards others' in the hope it upholds a norm that will get my goals carried out later. Or, if my goals are beyond my own ability to execute, then surely I should be working to get those goals accepted by more people on their own terms, rather than as an obligation to me.
Facts and data are of limited use without a paradigm to conceptualize them. If you have some you think are particularly illuminative though by all means share them here.
As a layperson, my problem has been that figuring out what's true relies on being able to evaluate subject-matter experts' respective reliability on the technical elements of alignment. I've lurked in this community a long time; I've read the Sequences and watched the Robert Miles videos. I can offer a passing explanation of what the corrigibility problem is, or why ELK might be important.
None of that seems to count for much. Yitz made what I thought was a very lucid post from a similar level of knowledge, trying to bridge that gap, and got mostly answers that didn't tell me (or as best I can tell, them) anything in concept I wasn't already aware of, plus Eliezer himself being kind of hostile in response to someone trying to understand.
So here I find myself in the worst of both worlds: the apparent plurality of the LessWrong commentariat says I'm going to die, and that to maximise my chance of dying with dignity I should quit my job, take out a bunch of loans, and try to turbo through an advanced degree in machine learning, and I don't have the tools to evaluate whether they're right.
https://onlinelibrary.wiley.com/doi/abs/10.1111/agec.12116
This indicates a supply elasticity in the range of 0.05 to 0.40. That's congruent with my experience in the ag industry; farmers tend to be risk-averse concerning price volatility and as such rarely scale up total production massively.
You can hedge against that volatility to some extent by signing purchase contracts in the spring during planting, but buyers obviously offer such contracts based on their own desire to not be stuck buying high at harvest time, so the hedging can't totally resolve the problem.
There's also the agronomy of it to consider in some cases; sustainable crop cycles don't always allow for agile reallocation of land.
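To put rough numbers on what an elasticity in that range implies, here's a minimal sketch. The 20% price rise is a made-up figure, and the log-linear approximation %ΔQ ≈ elasticity × %ΔP is a simplification of how elasticities are actually estimated:

```python
# Rough supply-response arithmetic implied by an own-price supply
# elasticity, using the standard approximation: %dQ = elasticity * %dP
def supply_response_pct(elasticity: float, price_change_pct: float) -> float:
    """Approximate percent change in quantity supplied."""
    return elasticity * price_change_pct

# A hypothetical 20% price rise at the low and high ends of the cited range:
low = supply_response_pct(0.05, 20.0)   # about 1% more production
high = supply_response_pct(0.40, 20.0)  # about 8% more production
```

Even at the top of the range, a large price swing translates into a fairly modest production response, which is consistent with the risk-averse behaviour described above.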
I don't understand how this isn't just making friends and encouraging the formation of friend groups based on common interests.
For the record, ISO 3103 is in no way optimized for a tasty cup of tea; it's explicitly a standardized procedure for producing comparable brews. Six minutes of brewing with boiling water can "scorch" certain teas by over-extracting tannins and other bitter compounds. If you dislike tea, there's a decent chance you would like it better with shorter brews or lower-temperature water (I use 90°C water for my black teas and 85°C for greens, for example).
I find myself concerned. Steven Pinker's past work has been infamously vulnerable to spot-checks of citations, leading me to heavily discount any given factual claims he makes. Is there reason to think he has made an effort here that will be any better constructed?
I don't necessarily agree with your impression of the McAfee thing. The man was by all accounts a very strange person; it doesn't seem overly credulous to think that he might have been both suicidal and paranoid about being murdered and made to look like a suicide.
Your notation is confusing but I achieved a similar result.
>It seems to me much safer to lay the burden of proof on the moral indulgence--at very least, the burden of proof shouldn't always rest on the demands of conscience.
I think I disagree. It seems to me that moral claims don't exist in a vacuum; they require a combination of asserted values and contextualizing facts. If the contextualizing facts are not established, the asserted value is irrelevant. For instance, I might claim that we have a moral duty not to brush our hair because it produces static electricity, and static electricity is a painful experience for electrons. The asserted value is preventing suffering, which you might agree with, but my contextualizing facts are highly disputable, so you're unlikely to shave your head and never wear another wool sweater just to be on the safe side.
It seems to me the burden of proof lies with the side making a claim further away from the socially established starting point, not necessarily either the conscience claimer or the indulgence claimer. In the case of animal welfare, I think most people already believe all the facts they need to conclude that harming chickens is morally bad and thus it makes more sense to ask them to justify the special pleading on behalf of the poultry industry.
One human's moral arrogance is another human's Occam's razor. The evidence suggests to me, on grounds of both observation (very small organisms demonstrate very simple behaviour not consistent with a high level of awareness) and theory (very small organisms have extremely minimal sensory/nervous architecture to contain qualia), that dust mites are morally irrelevant, and the chance that I am mistaken in my opinion amounts to a Pascal's Mugging.
From Ozy:
"I recently read an essay by Peter Singer, Ethics Beyond Species and Beyond Instincts, in which he defined the moral as that which is universalizable, in this sense: “We can distinguish the moral from the nonmoral by appeal to the idea that when we think, judge, or act within the realm of the moral, we do so in a manner that we are prepared to apply to all others who are similarly placed.”
I read that, sat back, and said to myself: “I cannot do morality.”
I cannot do it in the same sense that an alcoholic cannot drink, and a person with an eating disorder cannot go on a diet. I am incapable of engaging with universalizable morality in a way that does not cause me severe mental harm. While I can reject a universalizable moral claim on an intellectual level, I am incapable of rejecting them– no matter how absurd or contradictory to other things I accept– on an emotional level. If I fail to live up to such a claim, I will hate myself and curl in a ball and be utterly nonfunctional for a few hours, causing harm to both myself and those who have to put up with me.
So (with much backsliding) I have started to make an effort to weed out the universalizable morality from my brain. I do things I want to do, and I don’t do things I don’t want to do."
https://thingofthings.wordpress.com/2016/06/13/assorted-thoughts-on-scrupulosity/
You and your girlfriend seem to have adopted a philosophical standard of morals which humans cannot uphold. I happen to believe that the case for the moral weight of organisms lacking central nervous systems is extremely weak, but resisting the temptation to dismiss your position on those grounds alone, I would say that if your slime civilization were proven real tomorrow, there would be nothing to do except acknowledge the tragedy and move on with life. It's not like human-dominated environments make up a majority of those that are so theoretically miserable for ants and dust mites and the bugs in Brian Tomasik's compost, so even radical anti-natalism would accomplish a statistical nothing. If the ants suffered as you killed them, then the tragedy is not that you did it but that those ants were born into a world so hostile that if you hadn't killed them because they can't live in your apartment, they would have been eaten by birds, or at war with other colonies, or frozen/drowned/dehydrated by the millions thanks to the weather.
Thankfully I do believe the case for the moral worth of ants is weak, so I hope you will consider seeking out counselling on how to reduce your/your girlfriend's apparent feelings of shame for the largely hypothetical moral suffering you worry about causing.
I am not a true expert, but there is one major element of this narrative that most coverage leaves out— no matter what happens to the short-sellers, the price of GameStop and other short-squeezed stocks must eventually normalize to a "truer" valuation.
I have seen a truly alarming lack of recognition of this fact, with some people apparently believing the squeezed price is the new normal for GME. Here's why that probably isn't the case:
The value of a stock is tied to two factors. One is (broadly) the cash flows one can expect to receive in the form of dividends and other shareholder benefits; the other is the expectation of the stock's value appreciating. Market manipulation like the current squeeze can cause the price of a stock to inflate based on that second factor. As the archetypal example, we look to the housing crash that caused the '08 recession. Thousands of mortgages were given out because it was thought that home prices would continue to rise indefinitely, meaning the loans were low risk (because even if the home buyer couldn't make their payment, the bank could seize the house and not take a loss). This was fine until it suddenly wasn't anymore; the assets lost perceived value, and the remaining fundamentals, i.e. homeowners' ability to service their debts, were not up to the task of keeping the banks solvent.
For Gamestop, I'm told there is some reason to think their fundamentals are getting better from where they were one year ago, but I have seen no compelling reasons that those fundamentals will deliver the kind of dividends that would traditionally command such share prices.
When the short squeeze passes, some Wall Street firms will have taken a big loss, but many small investors will be left holding a stock that may still nominally bear a $300+ price per share, but will probably not deliver the same cash flows or stability as an equal holding in a business with stronger fundamentals than GameStop. In the absence of people shorting, you end up deciding whether to keep your money tied up in GME, which will return $X over however long you hold it, or in some other stock that could return $2X or $3X. At that point, after the short squeeze is resolved, the price will start to fall again.
The investors who were able to sell the $300 stocks to firms obligated to meet short contracts will realize a big cash gain, but anyone left holding the stock after that is likely to be in a seriously bad way.
This, of course, is not investment advice. If I knew exactly when the people holding GME were going to get nervous and try to liquidate, I could just take out new shorts and get rich (and if enough people did that, maybe WSB would just try to squeeze those shorts again!). What all of this boils down to is that this is not the new normal; it is a speculation bubble, and bubbles pop.
Saskatchewanian checking in here. As with your Vancouver Island example, there's a lot of heterogeneity here too. The south of the province, where I grew up, has extremely low numbers of cases even relative to the sparse rural population, while anywhere north of Saskatoon where I currently live is doing fairly badly relative to their sparse rural population. I don't have a strong gears-level understanding of why this should be except some vague notion that the North sees more traffic entering and exiting in the course of resource extraction industries, and close living quarters associated with the same. Plus something something rampant spread in First Nations which I don't even want to get into.
The notion of weirdness points has never spoken to me, personally, because it seems to collapse a lot of social nuance into a singular dichotomy of weird/not weird, and furthermore assumes that weirdness is in some sense measurable and fungible. Neither, I think, is true, and the framework ought to be dissolved. So what goes into a "weirdness point"?
- How familiar is the idea? Vegetarians/vegans are a little weird, but most people probably know a handful, most have a notion that those people care about animal welfare, and maybe some even know about nutritional ideas or the effect of meat on the climate. Cryonicists are extremely uncommon and their philosophy is not widely spread, so people need to do a bigger intellectual lift to understand them.
- How appropriate is the sharing? A vegan has an understandable reason to mention their diet almost any time a meal is shared, but if they never stop talking about it at parties, people will be annoyed and less sympathetic. The appropriate time to bring up cryonics is... during discussions about philosophy of death? Futurism? Maybe you can get away with it if someone just asks you what you're reading lately.
- How demanding is the idea? People tend not to be huge fans of being asked to do things they wouldn't normally want to do. This is of course a fundamental obstacle to anyone looking to change the world for the better, but it still bears consideration. More demanding ideas require more compelling evidence and more time to allow people to come around to them, and will often require a lighter touch up front to not be dismissed entirely.
All of these factors and more besides constitute the weirdness of an idea, but to me none of them suggests the best strategy is to hide your ideas. It seems to me that dissolving a weirdness point just tells us something we could probably have figured out in the first place: weirdness exists only in social contexts and can thus be moderated by just developing better social skills. I can be honest about the vast majority of beliefs I hold by just picking the right moments to share them and choosing the way I frame them based on my understanding of the points above. That's not propaganda papering over a forgettable version of myself; it's just correct gameplay.
Are there any resources that amount to "80,000 Hours for (hopefully reformed) underachievers"? I've been weighing the possibility of going back to school in the hopes of getting into a higher-impact field, but my academic resume from my bachelor's is pretty lackluster, leaving me unsure where to start reconstruction. My mental health and general level of conscientiousness are both considerably improved from my younger years so I'm optimistic I can exceed my past self.
Not necessarily. If I am an academic whose research is undermined by bias, I may be irrational but not stupid, and if I am in a social environment where certain signals of stupid beliefs are advantageous, I may be stupid but not irrational. It seems to me the latter is more what the author is getting at.
See my comments above for some discussion of this topic. Broadly speaking we do know how to keep farmland productive but there are uncaptured externalities and other inadequacies to be accounted for.
That's fair, and I'm grumbling less as an ag scientist or policy person than as a layperson born and raised in the ag industry. It is my opinion that the commercial ag industry in my country both contains inadequacies and is a system of no free energy, to borrow from Inadequate Equilibria.
To elaborate, I observe the following facts:
- Conventional agriculture using fertilizer and pesticide creates negative externalities, notably by polluting runoff and consuming non-renewable resources (potassium fertilizer, for example, is made from potash, a reasonably abundant but finite mineral whose mining also carries a carbon footprint).
- Organic agriculture sacrifices considerable output as practiced, and is optimized not for minimal environmental impact but for maximal appeal to the organic food market, and as such also contains negative externalities which are not currently captured.
- Almost no commercial agriculture in my area, organic or otherwise, incorporates livestock into land rotation cycles. Although I don't have sources at hand, I am under the impression that evidence suggests grazing animals not only replenish macronutrients but also help maintain a robust and fertile soil microbiome. Although labour is a factor, consider that under the status quo, ranchers own land and farmers own different land, and that land changing hands once every several years would on its own be an improvement.
- Most commercial ag operations are extremely conservative about implementing operational changes, for good reason. Being subject to both global market fluctuations and climate fluctuations is an unenviable business position.
Combine all these things I have seen firsthand, and I do conclude there is a better global maximum out there somewhere. And granted, if I were appointed Ag Czar it would no doubt be a Great Leap Forward-like disaster because I don't have the in-depth knowledge required to overhaul a complex ecological and economic system.
To bring all this back to the original thesis of the post, the precise reason I raised these gripes is because I agree with jasoncrawford that the waterline for industrial literacy is too low and more people should have a basic grasp of how these systems work. But like the Gell-Mann in the apocryphal story about trusting the news, I looked at his list of "things people should know about industry" and thought "Well... I have something to add to that, if people are going to take this post as a starting point for things that are important to know".