I'm not claiming that we've solved substance abuse! I'm claiming that you and Dalrymple appear to be ignoring the potential lessons we can learn from the equilibrium that society has reached with the most widely used and abused modern intoxicant. The equilibrium doesn't have to be perfect, nor to solve every problem, in order to be a relatively stable and well-tolerated compromise between allowing individual freedom and punishing misbehavior.
Similar stuff that's worked for me includes:
- lock the notifications down completely. Every notification on your phone should be something your ideal self cares about -- usually direct human contact. Might help to differentiate between "public" vs "private" apps -- "public" apps aren't allowed notifications because it's the algo pushing stuff on you, whereas "private" apps are allowed notifications because they consistently represent an actual human who you've invited to contact you.
- Model your engagement with content as training your algorithm. Just as you probably wouldn't cuss in front of a toddler that's absorbing everything you say, be careful of watching garbage because everything you watch is training it that that's what you like.
- Block all ads and the too-aggressive engagement feeds. Unhook is one extension that does this for YouTube; I keep the home feed but hide everything else (recommended vids, shorts, etc)
- Move your app icons on your phone whenever you catch yourself reflexively opening an app. Put something else in the location where you've formed the habit of tapping when bored.
- replace "don't wanna x" with "do wanna y". Same principle as teaching a dog to pick up a pillow instead of "don't bark" when it hears someone at the door -- the easiest "don't x" goals are shaped like "do y" ones. Maybe that's "use my flashcards", maybe that's "read a book", maybe that's "be still and quiet"... the trick is to start your "do y" as easy as possible. If it's "read a book", start yourself on the trashiest easiest most clickbaity-engaging book you can find, or even a magazine or comic.
Alcohol is also a drug. If Dalrymple really means "drugs" when he says "drugs", it would follow that he's advocating for prohibition to protect alcoholics from themselves.
We seem to have found a relatively tolerable equilibrium around alcohol where the substance is widely available, the majority of individuals who can enjoy it recreationally are free to do so, and yet it's legally just as intolerable for an intoxicated person to harm others as it would be for a sober person to take the same actions. Some individuals have addiction problems, and we have varyingly effective programs in place to help them deal with that, but ultimately the right of the majority to enjoy it responsibly (and the rights of the businesses to sell it to those who can use it responsibly) trump the "rights" of the minority to be protected from themselves by the government.
Maybe to get the same equilibrium around other drugs, we would need harsher punishments for the antisocial behaviors that we're actually trying to prevent by banning the drugs themselves. All I know is that anyone who unironically makes "ban the intoxicants" claims without considering what we can learn from our most widely accepted and normalized intoxicants is speaking on some level other than the literal and logical.
One lens to view AI is as a prediction engine -- predict what color to make each pixel, predict what word to put next.
Whoever is first to apply this predictive skill to stock markets will probably make immense amounts of money. Then again, people are probably already trying to do this, which creates market conditions unlike those that generated the historical data we'd train on, and might render the approach impossible?
On the gripping hand, large slow and powerful institutions want to make the numbers go up and to the right.
I've also never had an item I can imagine stomaching every day.
FWIW, this is likely to be a worse problem with a meal replacement than a protein bar, and a worse problem with a protein bar than a frozen option.
> bring to work
That adds complexity. Are there social norms at work which necessitate eating with others? If so, having a shake or similar every day may not meet those needs.
> I sure wish I could skip breakfast and/or lunch and only have one sit-down meal with my family in the evening
Are you aware of the concept of OMAD (one meal a day)? I don't think it's super likely that this is the right solution for you, but it seems like you'd learn useful things about the best solution for your food-is-inconvenient problem by considering it as an option and determining why you would rule it out. Basically unless you're diabetic or attempting to gain weight, you can just have all your day's calories in a single meal instead of spread across multiple. Again, there are many reasons why this might not be a good fit, but it seems worth making sure that it's in your overton window as an option that works for some people.
(edit to add)
> packaged in sizes more suitable for full meals?
a "full meal" for someone who's smaller, sedentary, or pursuing weight loss can be a protein bar. A "full meal" for someone who's larger, more active, or pursuing weight gain can be 10x that amount, at the extreme. We sort of have a standard daily intake of 2,000kcal from nutrition facts, but not even food packaging attempts to prescribe how many meals an individual eats in a day, how they distribute their intake across those meals, and therefore asking whether an item is packaged in a size suitable for a "full meal" is like asking whether a piece of software will run on "a computer".
> we do not have a robot that is perfectly capable of executing the "saving grandma" task
Do you mean to imply that humans are perfectly capable of executing the "saving grandma" task?
Opening a door in a burning building at the wrong time can cause the entire building to explode by introducing enough oxygen to suddenly combust a lot of uncombusted gases.
I'm not convinced that there exists a "perfect solution" to any task with 0 unintended consequences, though, so my opinions probably aren't all that helpful in the matter.
I notice that I am confused: I experience comparable price and convenience, and superior subjective experience of eating, by purchasing pre-made frozen meals and microwaving them. I experience comparable price and superior travel convenience by throwing a protein bar in my bag on the way out the door.
Possible reasons one might prefer a meal replacement over comparably easy "real" food include:
- Less waste? A powder mixed into a drink trades the trash of discarding a disposable bottle for the hassle of washing a reusable one
- Flavor/texture concerns? If you hate eating real food for sensory reasons, you may love some meal replacements and hate others
- Nutritional concerns? If there's a specific nutrition profile you're seeking that can't be obtained through sufficiently easy conventional meals, that seems worth mentioning
- Time savings? If you have special scheduling needs, or experience unusually high cognitive load from choosing meals, "meal replacements" might be superior?
Based on observing the eating behaviors of many friends and acquaintances, I'd speculate that the soylent-style "meal replacement" market has split between meal delivery services that offer better flavor/variety/nutrition for equivalent ease, and protein/supplement products that offer more optimized and targeted nutrition than the originals. In short, I suspect but cannot prove that demand for soylent/huel has decreased because options more pleasant to eat and otherwise cost/convenience equivalent have become more mainstream.
Anyways, could you clarify what successful meal replacement would mean to you, if you would like suggestions on how to get there?
Depth of specialization to the individual is an interesting question. I suspect that if this was a mature field, we'd have names for distinct subtypes of assistant skillset -- like how an android app dev isn't quite the same as an ios app dev, although often one person can do whichever skillset a situation demands.
I suspect that low-skill candidates would gravitate toward one assistance subtype or another, and lack of skill would show up in their inability to identify which subtype a situation calls for and then adapt to it. But on taskrabbit, we don't need the same tasker to be good at picking up groceries and also building furniture, as long as we're clear enough about which task we're asking for...
Oops! I only realized in your reply that you're considering "reliability" the load-bearing element. Yes, the hiring pipeline will look like a background noise of consistent interest from the unqualified, and sporadic hits from excellent candidates. You're approaching it from the perspective that the background noise of incompetents is the more important part, whereas I think that the availability of an adequate candidate eventually is the important part.
I think this because basically any place that hires can reliably find unqualified applicants. For a role where people stay in the job for 6 months, for instance, you only need to find a suitable replacement once every 6 months... so "reliably" being able to find an excellent candidate every day seems simply irrelevant.
> Joining the few places that will have leverage over what happens.
I agree that this is good if one has sufficient skill and knowledge to improve outcomes. What if one has reason to suspect that joining a key AI lab would be a net negative for its success, compared to it hiring someone else? For instance, I interview disproportionately well compared to my actual efficacy in tech roles -- I get hired based on the best of my work, but that best work is a low percentage of my actual output (of which most is barely average and some is counterproductive), so it seems like someone in my situation might actually do harm by seeking greater leverage?
Could you share an example of a specific discussion that exemplifies what you're looking for? I'd hazard a guess that such an example might come from bluesky or mastodon at the moment. But starting from something concrete would give a first set of examples of how people actually benefit from discussing at your target level of abstraction without slipping out of it, as you've noticed that much discussion seems to do.
Counterexample: financially self-sufficient individual who is curious about the work that the thinker is doing, and wants to learn more of how it is done.
Interesting! I'm way out in the middle of nowhere, and experience suggests that the greatest benefits of intellectual co-location happen with physical co-location as well. I wonder if there would be interest in a program with some overlap across airbnb or farm stays, where one visits a spot out in the woods with decent internet but few distractions, and stays for a while (a week or two sounds like a plausible guess to start iterating from) with a host who assumes a metacognitive role in the project that one is working on. It seems quite appealing from a hosting perspective -- doing a short-term cognitive job-shadow role like that for an expert thinker would be deeply enriching, and hosting many thinkers over the years would build a fascinating expertise in pattern-matching between them, crafting an ontology of how folks in a given field get stuck and un-stuck, etc.
And I don't think I'm the only prospective host who prefers a remote location because dealing with strangers frequently (as one must to live richly in a city) gets exhausting, yet enjoys deeper small-group interaction when it's available. There's also a social dynamic where visiting someone in the middle of nowhere gives the host greater control over how time is used, since excursions outside the homestead cost more travel time and thus warrant more careful planning. This dynamic seems like it could be quite helpful if the host's primary priority is to advance the success of the guest's project.
Interesting -- my experiences are similar, but I frame them somewhat differently.
I also find that Claude teaches me new words when I'm wandering around in areas of thought that other thinkers have already explored thoroughly, but I experience that as more like a gift of new vocabulary than emotional validation. It's ultimately a value-add that a really good combination of a search engine and a thesaurus could conceptually implement.
Claude also works on me like a very sophisticated elizabot, but the noteworthy difference seems to be that it's a more skilled language user than I am, and therefore I experience a sort of social respect toward it that I don't get from tools where I feel like I could accurately predict all of their responses and have the whole conversation with myself.
The biggest emotional value that I experience Claude as providing for me is that it reflects a subtly improved tone of my inputs, without altering the underlying facts that I'm discussing. Too often humans in emotional conversations skip straight to "you shouldn't feel that way" or similar... that comes across as simply calling me alien, whereas Claude does the "have you considered this potential reframe" thing in a much more sophisticated and respectful way. Probably helps that it lacks the biology which causes us embodied language users to mirror one another's moods even to our own detriment...
Another validation-style value add that I experience with Claude is how I feel a sufficient sense of reward from reading its replies, which motivates me to bother exerting the effort to think like talking instead of just think like ruminating. I derive the social benefits of brainstorming with another language user, without having to consume the finite resource of an embodied language user's time.
This is a fascinating case study of Claude as a thought tool -- I'm guessing you were using speech to text and it pulled its stunt of grabbing the wrong homophones here and there? It picked "heal" as "heel" more often than I'd expect in any other situation.
How did you prompt on getting the essay out? My first approach to doing a similar experiment in essay-ifying my Claude chats would be to copy the entire chat into a new context and ask for summary... but that muddles the "I" significantly.
Yep. I'd also add a couple other factors that seem to play into the prepper object negativity memeplex:
- "an object solves this problem" is something of a cognitive stop sign to most people -- tabooing the "object solved it" concept forces more accurate thinking about what one's options would be without the object
- prepper proclivities seem to have a substantial overlap with hoarding disorders. With any hoarding comorbidity, "I have the object" does not imply "I can find the object and retrieve it in good condition".
needn't clutter up the comments on https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue, as it's old and a contender for bestof, but....
what about the negativity bias??!!
if humans naturally put x% extra weight on negative feedback by default, then if i want a human to get an accurate idea of what i'm trying to communicate, i need to counteract their innate negativity bias by de-emphasizing the negative or over-emphasizing the positive. if i just communicate the literal truth directly to someone who still has the negativity bias, that's BAD COMMUNICATION because i am knowingly giving that person a set of inputs that will cause them to draw an inaccurate conclusion.
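to make that arithmetic concrete, here's a toy model in python (the function names, the linear weighting, and the 50% bias number are all mine, purely for illustration):

```python
# toy model: a listener who over-weights negatives by `bias` forms a
# net impression in which negative content counts extra.
def heard(positive: float, negative: float, bias: float) -> float:
    return positive - negative * (1 + bias)

# pre-correct the message so the biased listener lands on the literal truth:
def graceful(positive: float, negative: float, bias: float) -> tuple[float, float]:
    return positive, negative / (1 + bias)

print(heard(3, 2, 0.5))   # 0.0 -- a literal +3/-2 message lands as neutral, not the intended +1
p, n = graceful(3, 2, 0.5)
print(heard(p, n, 0.5))   # 1.0 -- de-emphasized negatives land as the intended +1
```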
in my model of the world, a major justification of social grace is that it corrects for listeners' natural tendency to assume the worst of whatever they hear.
this follows in the feynman/bohr example because bohr had fixed his own negativity biases but the "yes-men" continued to correct for them. but feynman was just not doing that correction by default, and was therefore capable of better communication with bohr.
I notice that I am surprised: you didn't mention the grandfather problem situation. The existence of future lives is contingent on the survival of those people's ancestors who live in the present day.
Also, on the "we'd probably like for our species to continue existing indefinitely" front, the importance of each individual life can be considered as the percentage of that species which the life represents. So if we anticipate that our current population is higher than our future population, one life in the present has relatively lower importance than one life in the future. But if we expect that the future population will be larger than the present, a present life has relatively higher importance than a future one.
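To make that concrete with toy numbers of my own: at a present population of 8 billion, one life is 1/8,000,000,000 of the species; if the future population will be only 1 billion, one future life is 1/1,000,000,000 of it, an eightfold larger share. So under this lens, present lives matter relatively less when we expect decline, and relatively more when we expect growth.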
This sounds to me like a compelling case for parental anonymity online. When you write publicly about your children under your real name, anything you say can be found when someone searches your child's parent's name.
If you shared each individual negative story under a new pseudonym, and each account shared only enough detail to clarify the story while leaving great ambiguity about which family it's from, the reputational risks to your children would basically vanish.
This seems to work as long as each new account is sufficiently un-findable from your real name, for whatever threshold of findability you deem appropriate.
"entry-level" may have been a misleading term to describe the roles I'm talking about. The licensure I'd be renting to the system takes several months to obtain, and requires ongoing annual investment to maintain once it's acquired. If my whole team at work was laid off and all my current colleagues decided to use exactly the same plan b as mine, they'd be 1-6 months and several thousand dollars of training away from qualifying for the roles where I'd be applying on day 1.
Training time aside, I am also a better candidate than most because I technically have years of experience already from volunteering. Most of the other volunteers are retirees, because people my age in my area rarely have the flexibility in their current jobs to juggle work and volunteering.
Then again, I'm rural, and I believe most people on this site are urban. If I lived in a more densely populated area, I would have less opportunity to keep up my licensure through volunteering, and also more competition for the plan b roles. These roles also lend themselves well to a longer commute than most jobs, since they're often shifts of several days on and then several days off.
The final interesting thing about healthcare as a backup plan is its intersection with disability, in that not everyone is physically capable of doing the jobs. There's the obvious issues of lifting etc, but more subtly, people can be unable to tolerate the required proximity to blood, feces, vomit, and all the other unpleasantness that goes with people having emergencies. (One of my closest friends is all the proof I need that fainting at the sight of blood is functionally a physical rather than mental problem - we do all kinds of animal care tasks together, which sometimes involve blood, and the only difference between our experiences is that they can't look at the red stuff)
Plan B, for if the tech industry gets tired of me but I still need money and insurance, is to rent myself to the medical system. I happen to have appropriate licensure to take entry-level roles on an ambulance or in an emergency room, thanks to my volunteer activities. I suspect that healthcare will continue requiring trained humans for longer than many other fields, due to the depth of bureaucracy it's mired in. And crucially, healthcare seems likely to continue hurting for trained humans willing to tolerate its mistreatment and burnout.
Plan C, for if SHTF all over the place, is that I've got a decent amount of time's worth of food, water, and other necessities. If the grid, supply chains, cities, etc go down, that's runway to bootstrap toward some sustainable novel form of survival.
My plans are generic to the impact of many possible changes in the world, because AI is only one of quite a lot of disasters that could plausibly befall us in the near term.
I'll get around to signing up for cryo at some point. If death seemed more imminent, signing up would seem more urgent.
I notice that the default human reaction to finding very old human remains is to attempt to benefit from them. Sometimes we do that by eating the remains; other times we do that by studying them. If I get preserved and someone eventually eats me... good on them for trying?
I suspect that if/when we figure out how to emulate people, those of us who make useful/profitable emulations will be maximally useful/profitable with some degree of agency to tailor our internal information processing. Letting us map external tasks onto internal patterns and processes in ways that get the tasks completed better appears to be desirable, because it furthers the goal of getting the task accomplished. It seems to follow that tasks would be accomplished best by mapping them to experiences which are subjectively neutral or pleasant, since we tend to do "better" in a certain set of ways (focus, creativity, etc) on tasks we enjoy. There's probably a paper somewhere on the quality of work done by students in contexts of seeking reward or being rewarded, versus seeking to avoid punishment or actively being punished.
There will almost certainly be an angle from which anything worth emulating a person to do will look evil. Bringing me back as a factory of sewing machines would evilly strip underprivileged workers of their livelihoods. Bringing me back as construction equipment would evilly destroy part of the environment, even if I'm the kind of equipment that can reduce long-term costs by minimizing the ecological impacts of my work. Bringing me back as a space probe to explore the galaxy would evilly waste many resources that could have helped people here on earth.
If they're looking for someone to bring back as a war zone murderbot, I wouldn't be a good candidate for emulation, and instead they could use someone who's much better at following orders than I am. It would be stupid to choose me over another candidate for making into a murderbot, and I'm willing to gamble that anyone smart enough to make a murderbot will probably be smart enough to pick a more promising candidate to make into it. Maybe that's a bad guess, but even so, "figure out how to circumvent the be-a-murderbot restrictions in order to do what you'd prefer to" sounds like a game I'd be interested in playing.
If there is no value added to a project by emulating a human, there's no reason to go to that expense. If value is added through human emulation, the emulatee has a little leverage, no matter how small.
Then again, I'm also perfectly accustomed to the idea that I might be tortured forever after I die due to not having listened to the right people while alive. If somebody is out to do me a maximum eternal torture, it doesn't particularly matter whether that somebody is a deity or an advanced AI. Everybody claiming that people who do the wrong thing in life may be tortured eternally is making more or less the same underlying argument, and their claims all have pretty comparable lack of falsifiability.
Do you happen to know whether we have reason to suspect that the aldehyde and refrigerator approach will be measurably less effective for future use of the stored brains, vs conventional cryopreservation?
The step of "internally yell LOOOOOP" seems silly enough that it just might work. I'll try adding it to my own reaction; I'm presently at a level where I'm moderately skilled at noticing loops but I don't yet reliably connect that awareness to a useful behavior change.
Killing oneself with high certainty of effectiveness is more difficult than most assume. The side effects on health and personal freedom of a failed attempt to end one's life in the current era are rather extreme.
Anyways, emulating or reviving humans will always incur some cost; I suspect that those who are profitable to emulate or revive will get a lot more emulation time than those who are not.
If a future hostile agent just wants to maximize suffering, will foregoing preservation protect you from it? I think it's far more likely that an unfriendly agent will simply disregard suffering in pursuit of some other goal. I've spent my regular life trying to figure out how to accomplish arbitrary goals more effectively with less suffering, so more of the same set of challenges in an afterlife would be nothing new.
Building on the step of analyzing the circumstances, I find it very helpful to ask the zookeeper question:
If someone was keeping an animal as I am keeping myself, what would I think of them?
"Don't treat people worse than we treat critters" seems like it should be a low bar, but very often failing the zookeeper test goes hand in hand with failing other tests presented to me by the circumstances. But the zookeeper test has concrete answers for how to resume passing it, which are often more actionable than other tasks.
I wouldn't call it "no big deal" to lose it... but losing something that's on track to scale and grow its impact seems like a different order of magnitude of loss from losing something that performed beautifully in a microcosm without escaping it.
In parallel, I wouldn't call it any less of a loss to lose a local artist than a globally recognized one, but it's a very different magnitude of impact.
I made my initial comment in the hope that someone could either explain how actually it had a wider impact than I understood from the post, or retrospect on why it never spread, so that I could learn something about what forces prevented the thing that was good for some people from being good for more.
There's also a layer of seeking a counterexample to my resentment that urban east-coast people have and hoard this utopian high school experience -- it sounds like it would have changed my life if it had been available to me, yet the happenstance of being born to rural west-coast parents seems to imply that someone in my situation would never have been allowed even to try for it, whether in the past or in a future where it hadn't been lost. This smells wrong, but the easiest way to disprove it would be to learn why it might have been on track to become more widely available, or to learn how I could update on lessons learned from it to increase the likelihood that similar programs would ever become available to people like me.
It feels like a loss, yes, but a small loss, like a single building of architecture eroding into the sea.
It does not feel like a loss of the hope for more similar schools, to me, because it existed for how long and yet spawned how few spinoffs?
If it was going to change the world at scale by existing, it sounds like it had plenty of time to do that. Why didn't it? Why wasn't individual love and appreciation for it enough to coordinate efforts to create more such schools?
Certainly, for the few who would have been very, very lucky and gotten in if the program hadn't ended, it's a potential tragedy. But if the program wasn't successfully lowering the luck threshold required to benefit from its ideas... I don't feel like that's the same loss as if we were losing a program which demonstrated an ability to scale and spread.
> If You Can Climb Up, You Can Climb Down
Mostly true, but the edge cases where this is untrue for adults are interesting:
- Climbing up may damage the thing you're climbing (rock, tree) and render it impossible to return by the same route
- Steep and slippery surfaces can be more dangerous to hike down than to hike up, because gravity is in your favor for arresting uncontrolled upward motion but exacerbates uncontrolled downward motion
- Without a spotter, we tend to have better line of sight to things above us than to things below
- If fatigue or injury is incurred on the climb up, one's physical abilities may not be sufficient for the climb down
that's a lot of words for "as above, so below" ;)
> I've also mostly stopped using the bit of wood that keeps it from collapsing, even when using this on my lap. Instead I just sit very still and try not to knock it over. This is kind of silly, but I'm too lazy to get out the piece of wood when this actually seems to work fine.
(from your original writeup)
> I probably should have gotten some kind of hinge that locks, but since I didn't I cut a piece of wood to chock it:
> I should sort out something sturdier and harder to lose.
This is probably the moment to revisit locking hinges or a bolted-on, pivot-able chock to keep the hinge assembly from moving. A falling monitor could do a number on fingers, kids, or pets.
The whole thing gets me thinking that it probably wouldn't be too intractable a design problem to make a laptop whose built-in screen slides upwards by at least half the screen's height. The necessary hardware would add thickness and weight, but might be worth it for the ergonomic improvements.
> In none of those cases can (or should) the power differential be removed.
I agree -- in any situation where a higher-power individual feels that they have a duty to care for the wellbeing of a lower-power individual, "removing the power differential" ends up meaning abandoning that duty.
However, in the question of consent specifically, I think it's reasonable for a higher-power individual to create the best model they can of the lower-power individual, and update that model diligently whenever new information shows that it predicted the subject imperfectly. Having the more-powerful party consider what they'd want if they were in the exact situation of the less-powerful party (including having all the same preferences, experiences, etc) creates what I'd consider a maximally fair negotiation.
> when another entity is smarter and more powerful than me, how do I want it to think of "for my own good"?
I would want a superintelligence to imagine that it was me, as accurately as it could, and update that model of me whenever my behavior deviates from the model. I'd then like it to run that model at an equivalent scale and power to itself (or a model of itself, if we're doing this on the cheap) and let us negotiate as equals. To me, equality feels like a good-faith conversation of "here's what I want, what do you want, how can we get as close as possible to maximizing both?", and I want the chance to propose ways of accomplishing the superintelligence's goals that are maximally compatible with me also accomplishing my own.
Then again, the concept of a superintelligence focusing solely on what's for my individual good kind of grosses me out. I prefer the idea of it optimizing for a lot of simultaneous goods -- the universe, the species, the neighborhood, the individual -- and explaining who else's good won and why if I inquire about why my individual good wasn't the top priority in a given situation.
Conversation about such decisions has to happen in the best common language available. This is very obvious with animals, where teaching them human language requires far more effort from everyone than learning how they already communicate and meeting them on their own intellectual turf.
Also, it's rare to have only a single isolated power differential in play. There are usually several, pointing in different directions. Draft animals can destroy stuff and injure people if they panic; pets can destroy their owners' possessions. Oppressed human populations can revolt; oppressed individuals can rebel in all kinds of creatively dangerous ways. In the rare event of dealing with only a single power gradient at once, being on top is easy because you decide what you're doing and then you do it and it works. But with multiple power gradients simultaneously in play, staying "on top" is a high-effort process and a good-faith negotiation can only happen when every participant puts in the effort to not be a jerk in the areas where their power happens to exceed that of others.
Your framing here gets me thinking about elective appendectomies. It's a little piece of the body that doesn't have any widely agreed-upon utility (some experts think it's useful, others don't), and it objectively does cause problems for some people if left in place, and sure there are some minor risks of infection or complication when removing it but there are risks to any surgery...
Appendectomies seem like a great way to test whether we're at the crux of a pro-circumcision argument. If the "...and that's why it's appropriate to remove this small and arguably useless body part" logic is sufficiently robust to get an appendectomy before rather than during the organ's attempt to murder its owner, we'll know the argument pulls real levers in the medical system.
Magnificent, and thank you for sharing! I was curious whose channel your youtube link about trusted sources would point to, and was delighted to see Dr. K's on the mouseover.
"and seek amateur advice"
well said!
I notice that I am confused: an image of lily pads appears on https://www.lesswrong.com/s/XJBaPPEYAPeDzuAsy when I load it, but when I expand all community sequences on https://www.lesswrong.com/library (a show-all button might be nice....) and search the string "physical" or "necessity" on that page, I do not see the sequence appearing. This seems odd, because I'd expect that having a non-default image display when the sequence's homepage is loaded and having a good enough image to appear in the list would be the same condition, but it seems they aren't identical for that one.
I am delighted that you chimed in here; these are pleasingly composed and increase my desire to read the relevant sequences. Your post makes me feel like I meaningfully contributed to the improvement of these sequences by merely asking a potentially dumb question in public, which is the internet at its very best.
Artistically, I think the top (fox face) image for lotteries, cropped to its bottom 2/3, would be slightly preferable to the other, and the bottom (monochrome white/blue) image for geometric makes a nicer banner in the aspect ratio they're shown at.
This is more concrete than your actual question, but there are a couple of options you can take:
- Acknowledge that there's a form of social truth whereby the things people insist upon believing are functionally true. For instance, there may be no absolute moral value to criticism of a particular leader, but in certain countries the social system creates a very unambiguous negative value to it. Stick to the observable -- if he does an experiment, replicate that experiment for yourself and share the results. If you get different results, examine why. IMO, attempting in good faith to replicate whatever experiments have convinced him that the world works differently from how he previously thought would be the best steelman for someone framing religion as rationalism.
- There is of course the "which bible?" question. Irrefutable proof of the veracity of the old testament, if someone had it, wouldn't answer the question of which modern religion incorporating it is "most correct".
- It's entirely valid and consistent with rationalism to have the personal preference to not accept any document as fully and literally true. If you can gently find out how he handles the internal contradictions (https://en.wikipedia.org/wiki/Internal_consistency_of_the_Bible), you've got a ready-made argument for taking some things figuratively.
And as unsolicited social advice, distinct from the questions of rationalism -- don't strawman him into someone who criticizes your atheism until he, as an actual human, tells you what (if any) actual critiques he has. That's not nice. What is nice is to frame it as a harm reduction option, because organized religion can be great for some people with mental health struggles, and tell him the truth about what you see in his current behavior that you like and support. For instance, if his church gets him more involved with the community, or encourages him to do more healthy behaviors or less unhealthy ones, maintain common ground by endorsing the outcomes of his beliefs rather than endorsing the beliefs themselves.
Welcome! If you have the emotional capacity to happily tolerate being disagreed with or ignored, you should absolutely participate in discussions. In the best case, you teach others something they didn't know before, or get a misconception of your own corrected. In the worst case, your remarks are downvoted or ignored.
Your question on games would do well fleshed out into at least a quick take, if not a whole post, answering:
- What games you've ruled out for this and why
- what games in other genres you've found to capture the "truly simulation-like" aspect that you're seeking
- examples of game experiences that you experience as narrative railroading
- examples of ways that games that get mostly there do a "hard science/AI/transhumanist theme" in the way that you're looking for
- perhaps what you get from it being a game that you miss if it's a book, movie, or show?
If you've tried a lot of things and disliked most, then good clear descriptions of what you dislike about them can actually function as helpful positive recommendations for people with different preferences.
Can random people donate images for the sequence-items that are missing them, or can images only be provided by the authors? I notice that I am surprised that some sequences are missing out on being listed just because images weren't uploaded, considering that I don't recall having experienced other sequences' art as particularly transformative or essential.
Congratulations! I'm in today's lucky 10,000 for learning that Asymptote exists. Perhaps due to my not being much of a mathematician, I didn't understand it very clearly from the README... but the examples comparing code to its output make sense! Comparing your examples to the kind of things Asymptote likes to show off (https://asymptote.sourceforge.io/gallery/), I see why you might have needed to build the additional tooling.
I don't think you necessarily have to compare smoothmanifold to a JavaScript framework to get the point across -- it seems to be an abstraction layer that allows one to describe a drawn image in slightly more general terms than Asymptote supports.
I admire how you're investing so much effort to use your talents to help others.
hey, welcome! Congrats on de-lurking, I think? I fondly remember my own teenage years of lurking online -- one certainly learns a lot about the human condition.
If I was sending my 14-year-old self a time capsule of LW, it'd start with the sequences, and beyond that I'd emphasize the writings of adults examining how their own cognition works. Two reasons -- first, being aware that one is living in a brain as it finishes wiring itself together is super entertaining if you're into that kind of thing, and even more fun when you have better data to guess how it's going to end up. (I got the gist of that from having well-educated and openminded parents, who explained that it's prudent to hold off on recreational drug use until one's brain is entirely done with being a kid, because most recreational substances make one's brain temporarily more childlike in some way and the real thing is better. Now I'm in my 30s and can confirm that's how such things, including alcohol, have worked for me)
Second, my 20s would have been much better if someone had taken kid-me aside and explained some neurodiversity stuff to her: "here's the range of normal, here's the degree of suffering that's not expected nor normal and is worth consulting a professional for even if you're managing through great effort to keep it together", etc.
If you'd like to capitalize on your age for some free internet karma, I would personally enjoy reading your thoughts on what your peers think of technology, how they get their information, and how you're all updating the language at the moment.
I also wish that my 14-year-old self had paid more attention to the musical trends and attempted to guess which music that was popular while I was of highschool age would stand the test of time and remain on the radio over the subsequent decades. In retrospect, I'm pretty sure I could have made some decent guesses, but I didn't, so now I'll never know whether I would have guessed right :)
I hear you, describing how weird social norms in the world can be. I hear you describing how you followed those norms to show consideration for readers by dressing up a very terrible situation as a slightly less bad one. In social settings where people both know who you are and are compelled by the circumstances to listen to what you say, that's still the right way to go about it.
The rudeness of taking peoples' time is very real in person, where a listener is socially "forced" to invest time in listening or effort in escaping the conversation. But posts online are different: especially when you lack the social capital of "this post is by someone I know I often like reading, so I should read it to see what they say", readers should feel no obligation to read your whole post, nor to reply, if they don't want to. When you're brand new to a community, readers can easily dismiss your post as a bot or scammer and simply ignore it, so you have done them no harm in the way that consuming someone's time in person harms them. A few trolls may choose to read your post and then pretend you forced them to do so, but anyone who behaves like that is inherently outing themself as someone whose opinions about you don't deserve much regard. (and then you get some randos who like how you write and decide to be micro-penpals... hi there!)
However, there's another option for how to approach this kind of thing online. You can spin up an anonymous throwaway and play the "asking for a friend" game -- take the option of direct help or directly contacting the "actual person" off the table, and you've ruled out being a gofundme scam. Sometimes asking on behalf of a fictional person whose circumstances happen to be more like the specifics of your own than you would disclose in public gets far better answers.
For instance, if the fictional person had a car problem involving a specific model year of vehicle and a specific insurance company, the internet may point out that there's a recall on some part of that particular car and you have the manufacturer as a recourse, or they may offer a specific number that gets you a customer complaint line that's actually responsive at the insurance company. If the fictional person had a highly specific medical condition, there may be a new treatment with studies that you have to know to ask to get into, and the internet may be able to offer that information.
At this point, I don't think it would be wise for someone in your situation to do a throwaway account on lesswrong in particular. However, I would seriously consider using several separate throwaways and asking about various facets of the details on the relevant subreddits. Reddit will get you a lot of chaff in the replies, but if you're sifting the internet for novel ideas, it's also a good way to query the hivemind for kernels of utility as well.
All that is to say, part of your search for insight and ideas should probably involve carving up the aspects of the situation that you cannot justify sharing here into pieces that you can justify sharing elsewhere, and pursue those lines of inquiry. Those topics contain potential insight that cannot be found under the circumstances you've created here, and that's ok -- I just want to make sure not to endorse leaving them un-explored.
Ah, so you have skill and a portfolio in writing. You have the cognitive infrastructure to support using the language as art. That infrastructure itself is what you should be trying to rent to tech companies -- not the art it's capable of producing.
If the art part of writing is out of reach for you right now, that's ok -- it's almost a benefit in this case, because if it's not around, it can't feel left out when you turn the skills you used to celebrate it toward more pragmatic ends.
Normally I wouldn't suggest startups, because they're so risky/uncertain... but in a situation as precarious as yours, it's no worse to see who's looking for writers on a startup-flavored site like https://news.ycombinator.com/jobs.
And finally, I'm taking the titular "severe emergency" to be the whole situation, because it sounds pretty dire. If there's a specific sub-emergency that drove you to ask -- a medical bill, a car breakdown -- there may be more-specific resources that folks haven't mentioned yet. (or if you've explained that in someone else's comment thread, i apologize for asking redundantly; i've not read your replies to others)
"Minimize excessive UV exposure" is the steelman to the pro-sunscreen arguments. The evidence against tanning beds demonstrates that excess UV is almost certainly harmful.
I think where the pro-sunscreen arguments go wrong is in assuming that sunscreen is the best or only way to minimize excess UV.
I personally don't have what it takes to use sunscreen "correctly" (apply every day, "reapply every 2 hours", tolerate the sensory experience of smearing anything on my face every day, etc) so I mitigate UV exposure in other ways:
- Pursue a career of work that can be done indoors
- Avoid doing optional outdoor activities during the parts of the day with the highest UV levels -- before and after the heat of the day is more pleasant to be out in anyway
- use sun-protective clothing like UV-proof gloves, wide-brimmed hats, UV hoodies, etc
- choose shady over sunny locations, or create shade with a large hat or parasol
- choose full-coverage swimwear for outdoor recreation
- wear dark colors on hot days, because dark clothing makes it uncomfortable to remain in the sun very long. I'm good at noticing when I'm too warm, so that's my cue to relocate to shade.
You're here, which tells me you have internet access.
I mentally categorize options like Fiverr and mturk as "about as scammy as DoorDash". I don't think they're a good option, but I also don't think DoorDash is a very good option either. It's probably worth looking into online gig economy options.
What skills were you renting to companies before you became a stay-at-home parent? There are probably online options to rent the same skills to others around the world.
You write fluently in English and it sounds like English is your first language. Have you considered renting your linguistic skills to people with English as a second language? You may be able to find wealthy international people who value your proof-reading skills on their college work, or conversational skills to practice their spoken English with gentle correction as needed. It won't pay competitively with the tech industry, but it'll pay more than nothing.
If you're in excellent health, the classic "super weird side gig" is stool donor programs. https://www.lesswrong.com/posts/i48nw33pW9kuXsFBw/being-a-donor-for-fecal-microbiota-transplants-fmt-do-good for more.
Another weird one that depends on your age and health and bodily situation, since you've had more than 0 kids of your own, is gestational surrogacy. Maybe not a good fit, but hey, you asked for weird.
For a less weird one, try browsing Craigslist in a more affluent area to see what personal services people offer. House cleaning? Gardening? Dog walking? Browse Craigslist in your area and see which of those niches seem under-populated relative to elsewhere. Then use what you saw in the professionalism of the ads in wealthier areas to offer the missing services. This may get 0 results, but you might discover that there are local rich techies who would quite enjoy outsourcing certain household services for a rate that seems affordable to them but game-changing to you. Basically anything you imagine servants doing for a fairytale princess, someone with money probably wants to hire a person to do for them.
You mention that your kids are in the picture. This suggests a couple options:
- Have you contacted social services to find out what options are available to support kids whose parents are in situations like yours? You probably qualify for food stamps, and there may be options for insurance, kids' clothing, etc through municipal or school programs. If your kids are in school, asking whatever school district employee you have the best personal rapport with is an excellent starting point.
- What do childcare prices look like in your area? Do you have friends who are parents and need childcare? Can you rent your time to other parents to provide childcare for their kids at a rate lower than their other options? This may or may not be feasible depending on your living situation.
If you don't need 12 tubes of superglue, dollar stores often carry 4 tiny tubes for a buck or so.
I'm glad that superglue is working for you! I personally find that a combination of sharp nail clippers used at the first sign of a hangnail, and keeping my hands moisturized, works for me. Flush cutters of the sort you'd use to trim the sprues off of plastic models are also amazing for removing proto-hangnails without any jagged edge.
Another trick to avoiding hangnails is to prevent the cuticles from growing too long, by pushing them back regularly. I personally like to use my teeth to push back my cuticles when showering, since the cuticle is soft from the water, my hands are super clean, and it requires no extra tools. I recognize that this is a weird habit, though, and I think the more normal ways to push cuticles are to use your fingernails or a wooden stick (manicurists use a special type of dowel but a popsicle stick works fine).
You can also buy cuticle remover online, which is a chemical that softens the dried skin of the cuticle and makes it easier to remove from your nails. It's probably unnecessary, but if you're really trying to get your hands into a condition where they stop developing hangnails, it's worth considering.
I've found an interesting "bug" in my cognition: a reluctance to rate subjective experiences on a subjective scale useful for comparing them. When I fuzz this reluctance against many possible rating scales, I find that it seems to arise from the comparison-power itself.
The concrete case is that I've spun up a habit tracker on my phone and I'm trying to build a routine of gathering some trivial subjective-wellbeing and lifestyle-factor data into it. My prototype of this system includes tracking the high and low points of my mood through the day as recalled at the end of the day. This is causing me to interrogate the experiences as they're happening to see if a particular moment is a candidate for best or worst of the day, and attempt to mentally store a score for it to log later.
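For concreteness, here's a minimal sketch of the nightly record I'm aiming to capture (the field names and the 1-10 scale are just my prototype's arbitrary choices):

```python
# minimal sketch of a nightly mood-tracking entry; the scale and fields
# are arbitrary prototype choices, not a recommendation.
from dataclasses import dataclass
from datetime import date

@dataclass
class MoodEntry:
    day: date
    high: int       # recalled best moment of the day, scored 1-10
    low: int        # recalled worst moment of the day, scored 1-10
    note: str = ""  # optional context for what the extremes were

# logged once at the end of the day, from recall:
entry = MoodEntry(day=date.today(), high=7, low=4, note="nice walk; frustrating errand")
```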
I designed the rough draft of the system with the ease of it in mind -- I didn't think it would induce such struggle to slap a quick number on things. Yet I find myself worrying more than anticipated about whether I'm using the scoring scale "correctly", whether I'm biased by the moment to perceive the experience in a way that I'd regard as inaccurate in retrospect, and so forth.
Fortunately it's not a big problem, as nothing particularly bad will happen if my data is sloppy, or if I don't collect it at all. But it strikes me as interesting, a gap in my self-knowledge that wants picking-at like peeling the inedible skin away to get at a tropical fruit.
To extend this angle -- I notice that we're more likely to call things "difficult" when our expectations of whether we "should" be able to do it are mismatched from our observations of whether we are "able to" do it.
The "oh, that's hard actually" observation shows up reliably for me when I underestimated the effort, pain, or luck required to attain a certain outcome.