I notice that I am surprised: you didn't mention the grandfather problem. The existence of future lives is contingent on the survival of those people's ancestors who live in the present day.
Also, on the "we'd probably like for our species to continue existing indefinitely" front, the importance of each individual life can be treated as the fraction of the species that the life represents. So if we anticipate that our current population is higher than our future population, one life in the present has relatively lower importance than one life in the future. But if we expect the future population to be larger than the present one, a present life has relatively higher importance than a future one.
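As a toy worked example with made-up round numbers: if the present population were $10^{10}$ and the future population $10^9$, then

$$\text{importance}_{\text{present}} = \frac{1}{10^{10}} \;<\; \frac{1}{10^{9}} = \text{importance}_{\text{future}},$$

so on this measure each future life would count ten times as much as a present one.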
This sounds to me like a compelling case for parental anonymity online. When you write publicly about your children under your real name, anything you say can be found when someone searches your child's parent's name.
If you shared each individual negative story under a new pseudonym, and each account shared only enough detail to clarify the story while leaving great ambiguity about which family it's from, the reputational risks to your children would basically vanish.
This seems to work as long as each new account is sufficiently un-findable from your real name, for whatever threshold of findability you deem appropriate.
"entry-level" may have been a misleading term to describe the roles I'm talking about. The licensure I'd be renting to the system takes several months to obtain, and requires ongoing annual investment to maintain once it's acquired. If my whole team at work was laid off and all my current colleagues decided to use exactly the same plan b as mine, they'd be 1-6 months and several thousand dollars of training away from qualifying for the roles where I'd be applying on day 1.
Training time aside, I am also a better candidate than most because I technically have years of experience already from volunteering. Most of the other volunteers are retirees, because people my age in my area rarely have the flexibility in their current jobs to juggle work and volunteering.
Then again, I'm rural, and I believe most people on this site are urban. If I lived in a more densely populated area, I would have less opportunity to keep up my licensure through volunteering, and also more competition for the plan b roles. These roles also lend themselves well to a longer commute than most jobs, since they're often shifts of several days on and then several days off.
The final interesting thing about healthcare as a backup plan is its intersection with disability, in that not everyone is physically capable of doing the jobs. There are the obvious issues of lifting etc, but more subtly, people can be unable to tolerate the required proximity to blood, feces, vomit, and all the other unpleasantness that goes with people having emergencies. (One of my closest friends is all the proof I need that fainting at the sight of blood is functionally a physical rather than mental problem -- we do all kinds of animal care tasks together, which sometimes involve blood, and the only difference between our experiences is that they can't look at the red stuff.)
Plan B, for if the tech industry gets tired of me but I still need money and insurance, is to rent myself to the medical system. I happen to have appropriate licensure to take entry-level roles on an ambulance or in an emergency room, thanks to my volunteer activities. I suspect that healthcare will continue requiring trained humans for longer than many other fields, due to the depth of bureaucracy it's mired in. And crucially, healthcare seems likely to continue hurting for trained humans willing to tolerate its mistreatment and burnout.
Plan C, for if SHTF all over the place, is that I've got a decent stretch of time's worth of food and water and other necessities. If the grid, supply chains, cities, etc go down, that's runway to bootstrap toward some sustainable novel form of survival.
My plans are generic to the impact of many possible changes in the world, because AI is only one of quite a lot of disasters that could plausibly befall us in the near term.
I'll get around to signing up for cryo at some point. If death seemed more imminent, signing up would seem more urgent.
I notice that the default human reaction to finding very old human remains is to attempt to benefit from them. Sometimes we do that by eating the remains; other times we do that by studying them. If I get preserved and someone eventually eats me... good on them for trying?
I suspect that if/when we figure out how to emulate people, those of us who make useful/profitable emulations will be maximally useful/profitable if given some degree of agency to tailor our internal information processing. Letting us map external tasks onto internal patterns and processes in ways that get the tasks completed better is desirable to whoever wants the tasks done. It seems to follow that tasks would be accomplished best by mapping them onto experiences which are subjectively neutral or pleasant, since we tend to do "better" in a certain set of ways (focus, creativity, etc) on tasks we enjoy. There's probably a paper somewhere on the quality of work done by students seeking reward or being rewarded, versus seeking to avoid punishment or actively being punished.
There will almost certainly be an angle from which anything worth emulating a person to do will look evil. Bringing me back as a factory of sewing machines would evilly strip underprivileged workers of their livelihoods. Bringing me back as construction equipment would evilly destroy part of the environment, even if I'm the kind of equipment that can reduce long-term costs by minimizing the ecological impacts of my work. Bringing me back as a space probe to explore the galaxy would evilly waste many resources that could have helped people here on earth.
If they're looking for someone to bring back as a war zone murderbot, I wouldn't be a good candidate for emulation, and instead they could use someone who's much better at following orders than I am. It would be stupid to choose me over another candidate for making into a murderbot, and I'm willing to gamble that anyone smart enough to make a murderbot will probably be smart enough to pick a more promising candidate to make into it. Maybe that's a bad guess, but even so, "figure out how to circumvent the be-a-murderbot restrictions in order to do what you'd prefer to" sounds like a game I'd be interested in playing.
If there is no value added to a project by emulating a human, there's no reason to go to that expense. If value is added through human emulation, the emulatee has a little leverage, no matter how small.
Then again, I'm also perfectly accustomed to the idea that I might be tortured forever after I die due to not having listened to the right people while alive. If somebody is out to do me a maximum eternal torture, it doesn't particularly matter whether that somebody is a deity or an advanced AI. Everybody claiming that people who do the wrong thing in life may be tortured eternally is making more or less the same underlying argument, and their claims all have pretty comparable lack of falsifiability.
Do you happen to know whether we have reason to suspect that the aldehyde and refrigerator approach will be measurably less effective for future use of the stored brains, vs conventional cryopreservation?
The step of "internally yell LOOOOOP" seems silly enough that it just might work. I'll try adding it to my own reaction; I'm presently at a level where I'm moderately skilled at noticing loops but I don't yet reliably connect that awareness to a useful behavior change.
Killing oneself with high certainty of effectiveness is more difficult than most assume. The side effects on health and personal freedom of a failed attempt to end one's life in the current era are rather extreme.
Anyways, emulating or reviving humans will always incur some cost; I suspect that those who are profitable to emulate or revive will get a lot more emulation time than those who are not.
If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it? I think it's far more likely that an unfriendly agent will simply disregard suffering in pursuit of some other goal. I've spent my regular life trying to figure out how to accomplish arbitrary goals more effectively with less suffering, so more of the same set of challenges in an afterlife would be nothing new.
Building on the step of analyzing the circumstances, I find it very helpful to ask the zookeeper question:
If someone was keeping an animal as I am keeping myself, what would I think of them?
"Don't treat people worse than we treat critters" seems like it should be a low bar, but very often failing the zookeeper test goes hand in hand with failing other tests presented to me by the circumstances. But the zookeeper test has concrete answers for how to resume passing it, which are often more actionable than other tasks.
I wouldn't call it "no big deal" to lose it... but losing something that's on track to scale and grow its impact seems like a different order of magnitude of loss from losing something that performed beautifully in a microcosm without escaping it.
In parallel, I wouldn't call it any less of a loss to lose a local artist than a globally recognized one, but it's a very different magnitude of impact.
I made my initial comment in the hope that someone could either explain how actually it had a wider impact than I understood from the post, or retrospect on why it never spread, so that I could learn something about what forces prevented the thing that was good for some people from being good for more.
There's also a layer of seeking a counterexample to my resentment that urban east-coast people have and hoard this utopian high school experience. It sounds like it would have changed my life if it had been available to me, yet the happenstance of being born to rural west-coast parents seems to imply that someone in my situation would never have been allowed to even try for it, past or future, if it had not been lost. This smells wrong, but the easiest way to disprove it would be to learn why it might have been on track to become more widely available, or how I could update on lessons learned from it to increase the likelihood that similar programs ever become available to people like me.
It feels like a loss, yes, but a small loss, like a single building of architecture eroding into the sea.
It does not feel like a loss of the hope for more similar schools, to me, because it existed for how long and yet spawned how few spinoffs?
If it was going to change the world at scale by existing, it sounds like it had plenty of time to do that. Why didn't it? Why wasn't individual love and appreciation for it enough to coordinate efforts to create more such schools?
Certainly, for the few who would have been very, very lucky and gotten in if the program hadn't ended, it's a potential tragedy. But if the program wasn't successfully lowering the luck threshold required to benefit from its ideas... I don't feel like that's the same loss as if we were losing a program which demonstrated an ability to scale and spread.
If You Can Climb Up, You Can Climb Down
Mostly true, but the edge cases where this is untrue for adults are interesting:
- Climbing up may damage the thing you're climbing (rock, tree) and render it impossible to return by the same route
- Steep and slippery surfaces can be more dangerous to hike down than to hike up, because gravity is in your favor for arresting uncontrolled upward motion but exacerbates uncontrolled downward motion
- Without a spotter, we tend to have better line of sight to things above us than to things below
- If fatigue or injury is incurred on the climb up, one's physical abilities may not be sufficient for the climb down
that's a lot of words for "as above, so below" ;)
I've also mostly stopped using the bit of wood that keeps it from collapsing, even when using this on my lap. Instead I just sit very still and try not to knock it over. This is kind of silly, but I'm too lazy to get out the piece of wood when this actually seems to work fine.
(from your original writeup)
I probably should have gotten some kind of hinge that locks, but since I didn't I cut a piece of wood to chock it:
I should sort out something sturdier and harder to lose.
This is probably the moment to revisit locking hinges or a bolted-on, pivot-able chock to keep the hinge assembly from moving. A falling monitor could do a number on fingers, kids, or pets.
The whole thing gets me thinking that it probably wouldn't be too intractable a design problem to make a laptop whose built-in screen slides upwards by at least half the screen's height. The necessary hardware would add thickness and weight, but might be worth it for the ergonomic improvements.
In none of those cases can (or should) the power differential be removed.
I agree -- in any situation where a higher-power individual feels that they have a duty to care for the wellbeing of a lower-power individual, "removing the power differential" ends up meaning abandoning that duty.
However, on the question of consent specifically, I think it's reasonable for a higher-power individual to create the best model they can of the lower-power individual, and to update that model diligently whenever new information shows it predicted the subject imperfectly. Having the more-powerful party consider what they'd want if they were in the exact situation of the less-powerful party (including having all the same preferences, experiences, etc) creates what I'd consider a maximally fair negotiation.
when another entity is smarter and more powerful than me, how do I want it to think of "for my own good"?
I would want a superintelligence to imagine that it was me, as accurately as it could, and update that model of me whenever my behavior deviates from the model. I'd then like it to run that model at an equivalent scale and power to itself (or a model of itself, if we're doing this on the cheap) and let us negotiate as equals. To me, equality feels like a good-faith conversation of "here's what I want, what do you want, how can we get as close as possible to maximizing both?", and I want the chance to propose ways of accomplishing the superintelligence's goals that are maximally compatible with me also accomplishing my own.
Then again, the concept of a superintelligence focusing solely on what's for my individual good kind of grosses me out. I prefer the idea of it optimizing for a lot of simultaneous goods -- the universe, the species, the neighborhood, the individual -- and explaining who else's good won and why if I inquire about why my individual good wasn't the top priority in a given situation.
Conversation about such decisions has to happen in the best common language available. This is very obvious with animals, where teaching them human language requires far more effort from everyone than learning how they already communicate and meeting them on their own intellectual turf.
Also, it's rare to have only a single isolated power differential in play. There are usually several, pointing in different directions. Draft animals can destroy stuff and injure people if they panic; pets can destroy their owners' possessions. Oppressed human populations can revolt; oppressed individuals can rebel in all kinds of creatively dangerous ways. In the rare event of dealing with only a single power gradient at once, being on top is easy because you decide what you're doing and then you do it and it works. But with multiple power gradients simultaneously in play, staying "on top" is a high-effort process and a good-faith negotiation can only happen when every participant puts in the effort to not be a jerk in the areas where their power happens to exceed that of others.
Your framing here gets me thinking about elective appendectomies. It's a little piece of the body that doesn't have any widely agreed-upon utility (some experts think it's useful, others don't), and it objectively does cause problems for some people if left in place, and sure there are some minor risks of infection or complication when removing it but there are risks to any surgery...
Appendectomies seem like a great way to test whether we're at the crux of a pro-circumcision argument. If the "...and that's why it's appropriate to remove this small and arguably useless body part" logic is sufficiently robust to get an appendectomy before rather than during the organ's attempt to murder its owner, we'll know the argument pulls real levers in the medical system.
Magnificent, and thank you for sharing! I was curious whose channel your youtube link about trusted sources would point to, and was delighted to see Dr. K's channel on the mouseover.
"and seek amateur advice"
well said!
I notice that I am confused: an image of lily pads appears on https://www.lesswrong.com/s/XJBaPPEYAPeDzuAsy when I load it, but when I expand all community sequences on https://www.lesswrong.com/library (a show-all button might be nice....) and search the string "physical" or "necessity" on that page, I do not see the post appearing. This seems odd, because I'd expect "having a non-default image display when the sequence's homepage is loaded" and "having a good enough image to appear in the list" to be the same condition, but it seems they aren't identical for that one.
I am delighted that you chimed in here; these are pleasingly composed and increase my desire to read the relevant sequences. Your post makes me feel like I meaningfully contributed to the improvement of these sequences by merely asking a potentially dumb question in public, which is the internet at its very best.
Artistically, I think the top (fox face) image for lotteries, cropped to its bottom 2/3, would be slightly preferable to the other, and the bottom (monochrome white/blue) image for geometric makes a nicer banner at the aspect ratio they're shown in.
More concrete than your actual question, but there are a couple of options you can take:
- Acknowledge that there's a form of social truth whereby the things people insist upon believing are functionally true. For instance, there may be no absolute moral value to criticism of a particular leader, but in certain countries the social system creates a very unambiguous negative value to it. Stick to the observable -- if he does an experiment, replicate that experiment for yourself and share the results. If you get different results, examine why. IMO, attempting in good faith to replicate whatever experiments have convinced him that the world works differently from how he previously thought would be the best steelman for someone framing religion as rationalism.
- There is of course the "which bible?" question. Irrefutable proof of the veracity of the old testament, if someone had it, wouldn't answer the question of which modern religion incorporating it is "most correct".
- It's entirely valid and consistent with rationalism to have the personal preference to not accept any document as fully and literally true. If you can gently find out how he handles the internal contradictions (https://en.wikipedia.org/wiki/Internal_consistency_of_the_Bible), you've got a ready-made argument for taking some things figuratively.
And as unsolicited social advice, distinct from the questions of rationalism -- don't strawman him into someone who criticizes your atheism until he, as an actual human, tells you what, if any, actual critiques he has. That's not nice. What is nice is to frame it as a harm reduction option, because organized religion can be great for some people with mental health struggles, and to tell him the truth about what you see in his current behavior that you like and support. For instance, if his church gets him more involved with the community, or encourages him to do more healthy behaviors or fewer unhealthy ones, maintain common ground by endorsing the outcomes of his beliefs rather than endorsing the beliefs themselves.
Welcome! If you have the emotional capacity to happily tolerate being disagreed with or ignored, you should absolutely participate in discussions. In the best case, you teach others something they didn't know before, or get a misconception of your own corrected. In the worst case, your remarks are downvoted or ignored.
Your question on games would do well fleshed out into at least a quick take, if not a whole post, answering:
- What games you've ruled out for this and why
- what games in other genres you've found to capture the "truly simulation-like" aspect that you're seeking
- examples of game experiences that you experience as narrative railroading
- examples of ways that games that get mostly there do a "hard science/AI/transhumanist theme" in the way that you're looking for
- perhaps what you get from it being a game that you miss if it's a book, movie, or show?
If you've tried a lot of things and disliked most, then good clear descriptions of what you dislike about them can actually function as helpful positive recommendations for people with different preferences.
Can random people donate images for the sequence-items that are missing them, or can images only be provided by the authors? I notice that I am surprised that some sequences are missing out on being listed just because images weren't uploaded, considering that I don't recall having experienced other sequences' art as particularly transformative or essential.
Congratulations! I'm in today's lucky 10,000 for learning that Asymptote exists. Perhaps due to my not being much of a mathematician, I didn't understand it very clearly from the README... but the examples comparing code to its output make sense! Comparing your examples to the kind of things Asymptote likes to show off (https://asymptote.sourceforge.io/gallery/), I see why you might have needed to build the additional tooling.
I don't think you necessarily have to compare smoothmanifold to a JavaScript framework to get the point across -- it seems to be an abstraction layer that allows one to describe a drawn image in slightly more general terms than Asymptote supports.
I admire how you're investing so much effort to use your talents to help others.
hey, welcome! Congrats on de-lurking, I think? I fondly remember my own teenage years of lurking online -- one certainly learns a lot about the human condition.
If I was sending my 14-year-old self a time capsule of LW, it'd start with the sequences, and beyond that I'd emphasize the writings of adults examining how their own cognition works. Two reasons -- first, being aware that one is living in a brain as it finishes wiring itself together is super entertaining if you're into that kind of thing, and even more fun when you have better data to guess how it's going to end up. (I got the gist of that from having well-educated and openminded parents, who explained that it's prudent to hold off on recreational drug use until one's brain is entirely done with being a kid, because most recreational substances make one's brain temporarily more childlike in some way and the real thing is better. Now I'm in my 30s and can confirm that's how such things, including alcohol, have worked for me)
Second, my 20s would have been much better if someone had taken kid-me aside and explained some neurodiversity stuff to her: "here's the range of normal, here's the degree of suffering that's not expected nor normal and is worth consulting a professional for even if you're managing through great effort to keep it together", etc.
If you'd like to capitalize on your age for some free internet karma, I would personally enjoy reading your thoughts on what your peers think of technology, how they get their information, and how you're all updating the language at the moment.
I also wish that my 14-year-old self had paid more attention to the musical trends and attempted to guess which music that was popular while I was of highschool age would stand the test of time and remain on the radio over the subsequent decades. In retrospect, I'm pretty sure I could have made some decent guesses, but I didn't, so now I'll never know whether I would have guessed right :)
I hear you, describing how weird social norms in the world can be. I hear you describing how you followed those norms to show consideration for readers by dressing up a very terrible situation as a slightly less bad one. In social settings where people both know who you are and are compelled by the circumstances to listen to what you say, that's still the right way to go about it.
The rudeness of taking peoples' time is very real in person, where a listener is socially "forced" to invest time in listening or effort in escaping the conversation. But posts online are different: especially when you lack the social capital of "this post is by someone I know I often like reading, so I should read it to see what they say", readers should feel no obligation to read your whole post, nor to reply, if they don't want to. When you're brand new to a community, readers can easily dismiss your post as a bot or scammer and simply ignore it, so you have done them no harm in the way that consuming someone's time in person harms them. A few trolls may choose to read your post and then pretend you forced them to do so, but anyone who behaves like that is inherently outing themself as someone whose opinions about you don't deserve much regard. (and then you get some randos who like how you write and decide to be micro-penpals... hi there!)
However, there's another option for how to approach this kind of thing online. You can spin up an anonymous throwaway and play the "asking for a friend" game -- take the option of direct help or directly contacting the "actual person" off the table, and you've ruled out being a gofundme scam. Sometimes asking on behalf of a fictional person whose circumstances happen to be more like the specifics of your own than you would disclose in public gets far better answers.
For instance, if the fictional person had a car problem involving a specific model year of vehicle and a specific insurance company, the internet may point out that there's a recall on some part of that particular car and you have the manufacturer as a recourse, or they may offer a specific number that gets you a customer complaint line that's actually responsive at the insurance company. If the fictional person had a highly specific medical condition, there may be a new treatment with studies that you have to know to ask to get into, and the internet may be able to offer that information.
At this point, I don't think it would be wise for someone in your situation to do a throwaway account on lesswrong in particular. However, I would seriously consider using several separate throwaways and asking about various facets of the details on the relevant subreddits. Reddit will get you a lot of chaff in the replies, but if you're sifting the internet for novel ideas, it's also a good way to query the hivemind for kernels of utility as well.
All that is to say, part of your search for insight and ideas should probably involve carving up the aspects of the situation that you cannot justify sharing here into pieces that you can justify sharing elsewhere, and pursue those lines of inquiry. Those topics contain potential insight that cannot be found under the circumstances you've created here, and that's ok -- I just want to make sure not to endorse leaving them un-explored.
Ah, so you have skill and a portfolio in writing. You have the cognitive infrastructure to support using the language as art. That infrastructure itself is what you should be trying to rent to tech companies -- not the art it's capable of producing.
If the art part of writing is out of reach for you right now, that's ok -- it's almost a benefit in this case, because if it's not around it can't feel left out if you turn to more pragmatic ends the skills you used to celebrate it with.
Normally I wouldn't suggest startups, because they're so risky/uncertain... but in a situation as precarious as yours, it's no worse to see who's looking for writers on a startup-flavored site like https://news.ycombinator.com/jobs.
And finally, I'm taking the titular "severe emergency" to be the whole situation, because it sounds pretty dire. If there's a specific sub-emergency that drove you to ask -- a medical bill, a car breakdown -- there may be more-specific resources that folks haven't mentioned yet. (Or if you've explained that in someone else's comment thread, I apologize for asking redundantly; I've not read your replies to others.)
"Minimize excessive UV exposure" is the steelman to the pro-sunscreen arguments. The evidence against tanning beds demonstrates that excess UV is almost certainly harmful.
I think where the pro-sunscreen arguments go wrong is in assuming that sunscreen is the best or only way to minimize excess UV.
I personally don't have what it takes to use sunscreen "correctly" (apply every day, "reapply every 2 hours", tolerate the sensory experience of smearing anything on my face every day, etc) so I mitigate UV exposure in other ways:
- Pursue a career of work that can be done indoors
- Avoid doing optional outdoor activities during the parts of the day with the highest UV levels -- before and after the heat of the day is more pleasant to be out in anyway
- use sun-protective clothing like UV-proof gloves, wide-brimmed hats, UV hoodies, etc
- choose shady over sunny locations, or create shade with a large hat or parasol
- choose full-coverage swimwear for outdoor recreation
- wear dark colors on hot days, because dark clothing makes it uncomfortable to remain in the sun very long. I'm good at noticing when I'm too warm, so that's my cue to relocate to shade.
You're here, which tells me you have internet access.
I mentally categorize options like Fiverr and mturk as "about as scammy as DoorDash". I don't think they're a good option, but I also don't think DoorDash is a very good option either. It's probably worth looking into online gig economy options.
What skills were you renting to companies before you became a stay-at-home parent? There are probably online options to rent the same skills to others around the world.
You write fluently in English and it sounds like English is your first language. Have you considered renting your linguistic skills to people with English as a second language? You may be able to find wealthy international people who value your proof-reading skills on their college work, or conversational skills to practice their spoken English with gentle correction as needed. It won't pay competitively with the tech industry, but it'll pay more than nothing.
If you're in excellent health, the classic "super weird side gig" is stool donor programs. https://www.lesswrong.com/posts/i48nw33pW9kuXsFBw/being-a-donor-for-fecal-microbiota-transplants-fmt-do-good for more.
Another weird one that depends on your age and health and bodily situation, since you've had more than 0 kids of your own, is gestational surrogacy. Maybe not a good fit, but hey, you asked for weird.
For a less weird one, try browsing Craigslist in a more affluent area to see what personal services people offer. House cleaning? Gardening? Dog walking? Browse Craigslist in your area and see which of those niches seem under-populated relative to elsewhere. Then use what you saw in the professionalism of the ads in wealthier areas to offer the missing services. This may get 0 results, but you might discover that there are local rich techies who would quite enjoy outsourcing certain household services for a rate that seems affordable to them but game-changing to you. Basically anything you imagine servants doing for a fairytale princess, someone with money probably wants to hire a person to do for them.
You mention that your kids are in the picture. This suggests a couple options:
- Have you contacted social services to find out what options are available to support kids whose parents are in situations like yours? You probably qualify for food stamps, and there may be options for insurance, kids' clothing, etc through municipal or school programs. If your kids are in school, asking whatever school district employee you have the best personal rapport with is an excellent starting point.
- What do childcare prices look like in your area? Do you have friends who are parents and need childcare? Can you rent your time to other parents to provide childcare for their kids at a rate lower than their other options? This may or may not be feasible depending on your living situation.
If you don't need 12 tubes of superglue, dollar stores often carry 4 tiny tubes for a buck or so.
I'm glad that superglue is working for you! I personally find that a combination of sharp nail clippers used at the first sign of a hangnail, and keeping my hands moisturized, works for me. Flush cutters of the sort you'd use to trim the sprues off of plastic models are also amazing for removing proto-hangnails without any jagged edge.
Another trick to avoiding hangnails is to prevent the cuticles from growing too long, by pushing them back regularly. I personally like to use my teeth to push back my cuticles when showering, since the cuticle is soft from the water, my hands are super clean, and it requires no extra tools. I recognize that this is a weird habit, though, and I think the more normal ways to push cuticles are to use your fingernails or a wooden stick (manicurists use a special type of dowel but a popsicle stick works fine).
You can also buy cuticle remover online, which is a chemical that softens the dried skin of the cuticle and makes it easier to remove from your nails. It's probably unnecessary, but if you're really trying to get your hands into a condition where they stop developing hangnails, it's worth considering.
I've found an interesting "bug" in my cognition: a reluctance to rate subjective experiences on a subjective scale useful for comparing them. When I fuzz this reluctance against many possible rating scales, I find that it seems to arise from the comparison-power itself.
The concrete case is that I've spun up a habit tracker on my phone and I'm trying to build a routine of gathering some trivial subjective-wellbeing and lifestyle-factor data into it. My prototype of this system includes tracking the high and low points of my mood through the day as recalled at the end of the day. This is causing me to interrogate the experiences as they're happening to see if a particular moment is a candidate for best or worst of the day, and attempt to mentally store a score for it to log later.
I designed the rough draft of the system with the ease of it in mind -- I didn't think it would induce such struggle to slap a quick number on things. Yet I find myself worrying more than anticipated about whether I'm using the scoring scale "correctly", whether I'm biased by the moment to perceive the experience in a way that I'd regard as inaccurate in retrospect, and so forth.
Fortunately it's not a big problem, as nothing particularly bad will happen if my data is sloppy, or if I don't collect it at all. But it strikes me as interesting, a gap in my self-knowledge that wants picking-at like peeling the inedible skin away to get at a tropical fruit.
To extend this angle -- I notice that we're more likely to call things "difficult" when our expectations of whether we "should" be able to do it are mismatched from our observations of whether we are "able to" do it.
The "oh, that's hard actually" observation shows up reliably for me when I underestimated the effort, pain, or luck required to attain a certain outcome.
"time-consuming" does not cleanly encapsulate difficulty, because lots of easy things are time-consuming too.
Perhaps "slow to reward" is a better way to gesture at the phenomenon you mean? Learning a language takes a high effort investment before you can have a conversation; getting in shape takes a high effort investment before you see unambiguous bodily changes beyond just soreness. Watching TV and scrolling social media are both time-consuming, but I don't see people going around calling those activities difficult.
Green, on its face, seems like one of the main mistakes. Green is what told the rationalists to be more OK with death, and the EAs to be more OK with wild animal suffering. Green thinks that Nature is a harmony that human agency easily disrupts.
The shallow-green that's easy/possible to talk about characterizes humans as separate from or outside of nature. Shallow-green is also characteristic of scientists who probe and measure the world and present their findings as if the ways they touched the world to measure it were irrelevant -- in a sense, the changes made by the instruments' presence don't matter, but there's also a sense in which they matter greatly.
By contrast, imagine a deep-green: a perspective from which humanity is from and of nature itself. This deep-green is impractical to communicate about, and cutting it up into little pieces to try to address them one at a time loses something important of its nature.
One place where it's relatively easy to point at this deep-green is our understanding of what time means. It shows up in the way we accept base-12 and base-60 in our clocks and calendars, and in the reasons no "better" alternative has been "better" enough to win over the whole world.
The characterization of green as "harmony through acceptance" in your image from Duncan Sabien points at another interesting facet of green: "denial" of reality is antithetical to both "acceptance" and "rationality", albeit with slightly different connotations for each.
Then again, in this system I'd describe myself as having arrived at green through black, so perhaps it's only my biases talking.
I misread it as "murakami-sama" at first, which was also disproportionately charming.
It's clear to me from the post that to properly enjoy it as performance art, the audience is meant to believe that the music is AI-generated.
I don't read the post as disclosing how the music was "actually" made, in the most literal real-world sense.
Pretty cool, regardless, that we live in an era where 'people pretending to be AI making music' is not trivial to distinguish from 'AI trying to make music' :)
I like going barefoot. However, I live in a climate that's muddy for most of the year. When I'm entering and exiting my house frequently, being barefoot is impractical because the time it takes to adequately clean my feet is much greater than the time it takes to slip off a pair of shoes at the door.
Also, in the colder parts of the year, I find that covering my feet indoors allows me to be generally comfortable at lower ambient temperatures than I would require for being barefoot in the house. This isn't much of an issue during outdoor activities that promote circulation to the feet, but it's annoying when reading, at the computer, or doing other activities that involve staying relatively still.
"to clean house" as implication of violence...
Due to a tragic shortage of outbuildings (to be remedied in the mid term but not immediately), my living room is the garage/makerspace of my home. I cleaned as one cleans for guests last week, because a friend from way back was dropping by. I then got to enjoy a clean-enough-for-guests home for several days, which is a big part of why it is nice to be visited by friends un-intimate enough to feel like cleaning for.
Then my partner-in-crafts came over, and we re-occupied every table with a combination of resin casting and miniature clay sculpting shenanigans. It's an excellent time.
We also went shopping for fabric together because I plan to make a baby quilt for the kid-in-progress of the aforementioned friend from way back. Partner-in-crafts idly asked me when I was planning to do the quilt stuff, because historically I would be expected to launch into it immediately as soon as the fabric came out of the dryer.
However, I found something new in myself: a reluctance to start a new project without a clean place to start it in. I'm not sure where this reluctance came from, as it seems new, but I also think I like it. So I got to tidying up the stuff that had been un-tidyable last night because the resin was still sticky but is eminently tidyable now that it's cured, and carefully examining my reluctance-to-tidy as it tried to yell at me.
In that reluctance-to-tidy, I find time travel again: We store information in the position of objects in our environment. Object location encodes memory, so moving someone else's objects has certain commonalities with the rewriting-of-memory that we call gaslighting when pathological.
For better or worse, my architecture of cognition defaults to relying on empathy twice over when reasoning about moving stuff that someone else was using, or someone else's stuff. By recognizing an object's location as a person's memory of where-they-left-it, I view moving it as rewriting that memory.
The double-empathy thing comes in where I reason about what moves of stuff it's ok to make. If I put the thing where the person will have an easy time finding it, if I model them well enough to guess correctly where they'll first look when they want it, then I can help them by moving it. I can move it from somewhere they'd look later to somewhere they'd look sooner, and thereby improve their life at the moment of seeking it, and that's a clearly good act.
That's the first empathy layer. The second empathy layer comes of a natural tendency to anthropomorphize objects, which I've considered trying to eradicate from myself but settled on keeping because I find it quite convenient to have around in other circumstances. This is the animism of where something "wants" to go, creating a "home" for your keys by the door, and so forth.
So there's 2 layers of modeling minds -- one of complex real minds who are likely to contain surprises in their expectations, and one of simple virtual "minds" that follow from the real-minds as a convenient shortcut. I guess one way to put it is that I figure stuff has/channels feelings kinda like how houseplants do -- they probably don't experience firsthand emotion in any way that would be recognizable to people, but there's a lot of secondhand emotion that's shown in how they're related to and cared for.
Not sure where I'm going with all that, other than noticing how the urge to tidy up can be resisted by the same aesthetic sensibility that says it's generally bad to erase anybody's memories.
Seconding the importance of insulation, especially for disaster preparedness and weathering utility outages.
If any of your friends have a fancy thermal camera, see if you can borrow it. If not, there are some cheap options for building your own or pre-built ones on ebay. The cheap ones don't have great screens or refresh rates, but they do the job of visualizing which things are warmer and which are cooler.
Using a thermal imager, I managed to figure out the importance of closing the window blinds to keep the house warm. Having modern high-efficiency windows lulls me into a false sense of security about their insulative value, which I'm still un-learning.
One thing to keep in mind, though, is that even though the electricity here is mostly produced by burning gas you do actually burn less gas by turning it into electricity and then using it to run a heat pump than just burning it for heat.
Fascinating! I guess it'd fall into the "more moving parts to break" bucket, but it gets me wondering about switching from my current propane HVAC to propane generator + electric heat pump.
Searching the web for models that do both in a single unit, I find a lot of heat pumps using propane as their refrigerant, but no immediate hits using it as their fuel.
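For intuition, a back-of-envelope version of the quoted claim, using illustrative numbers (actual plant efficiencies and heat pump COPs vary):

$$\underbrace{0.45}_{\text{gas}\to\text{electricity}} \times \underbrace{3.0}_{\text{heat pump COP}} \approx 1.35 \quad \text{vs.} \quad \underbrace{0.95}_{\text{condensing furnace}}$$

units of heat delivered per unit of gas energy burned, and the heat-pump route still comes out ahead after typical grid transmission losses of a few percent.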
I personally suspect we'll perpetually keep moving the goalposts so whatever AI we currently have is obviously not AGI because AGI is by definition better than what we've got in some way. I think AI is already here and performing to standards that I would've called AGI or even magic if you'd showed it to me a decade ago, but we're continually coming up with reasons it isn't "really" AGI yet. I see no reason that we would culturally stop that habit of insisting that silicon-based minds are less real than carbon-based ones, at least as long as we keep using "belongs to the same species as me" as a load-bearing proxy for "is a person". (load-bearing because if you stop using species as a personhood constraint, it opens a possibility of human non-people, and we all know that bad things happen when we promote ideologies where that's possible).
However, I'm doing your point (6) anyways because everybody's aging. If I believed in AGI being around the corner, I'd probably spend less time with them, because "real AGI" as it's often mythologized could solve mortality and give me a lot more time with them.
I'm also doing your point (8) to some degree -- if I expect that new tooling will obviate a skill soon, I'm less likely to invest in developing the skill. While I don't think AI will get to a point where we widely recognize it as AGI, I do think we're building a lot of very powerful new tools right now with what we've already got.
If you're talking about literal parades -- I lead them annually at a smallish renaissance fair. Turns out that people with the combination of willingness to run around in front of a group looking silly, and enough time anxiety to actually show up to the morning ones, are in short supply.
That parade goes where I put it. There are several possible paths through the faire and I choose which one the group takes, and make the appropriate exaggerated gestures to steer the front of the crowd in that direction, and then the rest follow.
I also play a conspicuous looking instrument in the parade at a small annual local event that we convene a "band" for, as well. Since the instrument is large and obvious, I'm typically shoved to the front of the group as we line up. I'm pretty sure that if I went off script and took the parade out of the gathering's area, they'd probably follow me, because nobody else is quite sure where we're supposed to be going. If I conspired with the other musicians to take the group out of the event, we could almost certainly make that happen. I'm curious how far down the road we could get the dancers following the parade before they realize something is amiss, but also really don't want to be the individual to instigate that sort of experiment.
Back in high school, I did marching band. I think if our leader had been misinformed about where we should go, we would have followed them anyway. That's mostly because marching band has an almost paramilitary obedience theme going on, and can get a bit culty about directors or leaders in my experience. Marching as a group also confers a certain immunity to individual responsibility as long as you're following your orders. There's this confidence that if the leader takes the group off course, that leader will be the only individual who's personally in trouble for the error. The group might get yelled at collectively for having followed, but no one person in the group is any more responsible for the error than any other, except for the leader.
From these experiences, I'd speculate that the reason we don't see literal parades being counterfactually led off course like that on a regular basis is because the dynamic of leading it disincentivizes abusing that power. Being chosen and trusted by a group to lead them in a public setting where any errors you make will be instantly obvious to all onlookers confers a powerful desire to not mess up.
exceedingly long and complex sentences
Break them down. Long sentences in a comfortable cadence like being punctuated by short ones.
giving masses of detail with little apparent regard for how much information the person at the other end actually needs
Give more regard to what the reader needs.
"stick-on" weird metaphors which appear randomly every time I’m afraid I’m being too technical or annoying (so you get a wall of annoying text with a bit of canned laughter in the middle…)
Have you asked readers whether they dislike the metaphors?
vague sentences that go around for a while as I’m slowly figuring out what I mean to say
Rewrite after discovering your own intent. That's what editing is for.
long paragraphs, etc.
Fortunately your keyboard has an enter key, with no limit of uses.
Also, I spend ages proofreading anything I write and worrying about it…
Write where you feel that the stakes are low. If the consequences of poor proofreading don't feel worth worrying about, you can practice the skill of worrying less.
I anticipate that if your experiment is successful in discovering underrated synergies between perks, your new perk combos will be adopted more widely, which will affect perk selection behavior in your opponents, which will in turn affect the efficacy of the new synergies.
If you were crowdsourcing the perk combo experiments across many players, the experiment's complexity would explode when you try to control for the impact of player skill or individual playstyle preferences on the quality of a build.
I wonder if you could simplify the combinatorics of perk combo testing by grouping and rating perks by category of theme. For instance, it may be much better to factor each specific perk or item into its mechanical impacts, since there can be multiple impacts per item. Maybe perk X increases your shooting accuracy while decreasing your move speed. When you discover synergies with it, framing those synergies as "with anything that increases accuracy" or "with anything that decreases speed" or "with anything that does both" will give you a better "shopping list" in the perk tree -- insights of the flavor "the thing that increases my damage dealt when I'm moving slower works well with anything that makes me move slower".
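A minimal sketch of that factoring idea in Python, with hypothetical perk names and tags (none of this is from a real game): label each perk with its mechanical-effect tags, hand-rate synergies once at the tag level, and every perk sharing a tag inherits the synergy without new testing.

```python
from itertools import combinations

# Hypothetical perks, each factored into mechanical-effect tags.
perks = {
    "Steady Aim": {"accuracy_up", "speed_down"},
    "Heavy Plating": {"armor_up", "speed_down"},
    "Juggernaut": {"damage_up_when_slow"},
    "Marksman": {"accuracy_up"},
}

# Synergies rated once at the tag level: "damage_up_when_slow"
# pairs well with anything tagged "speed_down".
tag_synergies = {("damage_up_when_slow", "speed_down")}

def synergy_score(a: str, b: str) -> int:
    """Count tag-level synergies between two perks, in either order."""
    return sum(
        (x, y) in tag_synergies or (y, x) in tag_synergies
        for x in perks[a]
        for y in perks[b]
    )

# Candidate builds fall out of tag lookups rather than brute-force testing.
for a, b in combinations(perks, 2):
    if synergy_score(a, b):
        print(f"{a} + {b}: {synergy_score(a, b)} tag synergy")
```

The combinatorics shrink because you only hand-rate pairs of tags, of which there are far fewer than pairs of perks.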
The problem space here also reminds me of a video I saw awhile back: https://www.youtube.com/watch?v=5oULEuOoRd0&ab_channel=NightHawkInLight
I think that's definitely an aspect of the interesting side: effective encryption relies on deep understanding of how well the opponent can break the encryption. It needs to be strong enough to seem certain it won't be broken in a reasonable timeframe, but that balances against being fast enough to encrypt/decrypt so it's practical to use.
The encryption metaphor also highlights a side of rationality as rendering one's thoughts and actions maximally legible to observers, which strikes me as being true in some ways and interestingly limited in others.
It is indeed tricky to measure this stuff.
E.g. I can’t ask an LLM to go found a new company, give it some seed capital and an email account, and expect it to succeed.
In general, would you expect a human to succeed under those conditions? I wouldn't, but then most of the humans I associate with on a regular basis aren't entrepreneurs.
There’s a different claim, “we will sooner or later have AIs that can think and act at least 1-2 orders of magnitude faster than a human”. I see that claim as probably true, although I obviously can’t prove it.
Without bounding what tasks we want the computer to perform faster than the person, one could argue that we've met that criterion for decades. Definitely a matter of "most people" and "most computers" for both, but there's a lot of math that a majority of humans can't do quickly or can't do at all, whereas it's trivial for most computers.
With the core of rationalism being built from provable patterns of human irrationality, I wonder what "irrationalist" philosophy and behavior would look like.
What conclusions would follow from treating the human capacity for rational thought and behavior with the importance or mere obviousness more traditionally poured into attempting to understand and resolve our "irrationalities"?
There's the side from which the expected results look so obvious ("chaos, duh") that they don't seem worth thinking about. It's the boring one. There are others.
Part of the value of rational thought comes from its verifiability and replicability across many minds and eras. But that's no proof that every thought which fails to verify or replicate is without value.
(swap in whatever comparable term suits you for 'value' in that -- I've tried playing the whack-a-mole of tabooing each term in turn which slips into that conceptual void, and concluded that having some linguistic placeholder there seems load-bearing for communication).
Choose some trivial, popular self-improvement thing you want. Something you wouldn't mind changing about your cognition if it was easy, but wouldn't be heartbroken not to change if it didn't happen. Find some free self-hypnosis audio for it online, and skim a transcript of the script to make sure it's content you're ok with lowering your defenses toward. Then pretend to be the kind of person who just plain thinks it's interesting and worth a try, and listen to it and relax into it.
If you've practiced self-awareness and self-reflection, you will probably have the experience where the parts of your mind you watch yourself with remain normal, while the parts they're watching get lightly hypnotized. If all of you gets hypnotized, that's cool too, you're back to normal at the end of the audio and you might accidentally get a personal change you don't hate.
It's tempting to categorize hypnosis as an intellectual pursuit if you haven't interacted with it much, but it's really got a lot more in common with physical practices than mental ones. As with many physical pursuits, the important bits happen in the parts of human experience that are the hardest to transfer between minds through language, so reading about it will convey much less useful understanding than just giving it a try.
at least, I'm assuming you want to understand it. Some stuff, trying to understand from language is about as effective as trying to "understand" a cuisine by reading a cookbook that calls for a bunch of spices you're unfamiliar with. Skim the cookbook to make sure you're not allergic to any of the known ingredients, maybe, then just go visit the restaurant down the street.
One approach that's helped me in the executive functioning department is choosing to believe that connecting long-term wants to short-term wants is itself a skill.
I don't want to touch a hot stove, and yet I don't frame my "not touching a hot stove" behavior as an executive function problem because there's no time scale on which I want it. I don't want to have touched the stove; that'd just hurt and be of no benefit to anybody.
I don't particularly right-now-want to go do half an hour of exercise and make a small increment of progress on each of several ongoing projects today, but I do frame that as an executive function problem, because I long-term-want those things -- I want to have done them.
It's tempting to default to setting first-order metrics of success: I'll know I did well if I'm in shape and my ongoing projects are completed on time, for instance. But I find it much more actionable and helpful to look at second-order metrics of success: is this approach causing me better or worse progress on my concrete goals than other approaches?
For me, shifting the focus from the infrequent feedback of project completion to the constant feedback of process efficacy is helpful for not getting bored and giving up. Shifting from optimizing outputs to optimizing the process also helps me look for smaller and more concrete indicators that the process is working. I personally find that the most concrete and reliable "having my shit together" indicator is whether I'm keeping my home tidy, because that's always the first thing to go when I start dropping the ball on progress on my ongoing tasks in general. Yours may differ, but I suspect that addressing the alignment problem of coordinating your short-term wants with your long-term wants may be a more promising approach than trying to brute force through the wall of "don't wanna".