Posts

Edward Pascal's Shortform 2022-12-16T13:19:21.653Z
A reason behind bad systems, and moral implications of seeing this reason 2022-05-09T03:16:50.434Z

Comments

Comment by Edward Pascal (edward-pascal) on Let's make the truth easier to find · 2023-03-24T00:41:51.007Z · LW · GW

Then let's say we broadly agree on the morality of the matter. The question still remains whether another US adventure, this time in Europe, is actually going to turn out all that well (most haven't, for the people they claimed to be helping). We also have to wonder whether Russia as a failed state would turn out well for Ukraine or Europe, whether this will turn nuclear if the US/NATO refuse to cede any ground, whether the Russia/China alliance will break or not, how long the US can even afford to support more wars, etc., etc.

On the other side, do we worry that we're being Neville Chamberlain, because we think every aggressor will behave as Hitler did in 1938 if we give an inch, so "We gotta do something"? There may even be merit to the sentiment, but "We gotta do something" is one of the most likely ways to screw any situation up. Also, given the US's history of interventions, setting aside morality and just looking at the history of outcomes, the response is questionable. Looking down the road, if this conflict or anything else significantly weakens the US economically, in domestic politics, or through an overextended military, then Ukraine might be lost all the way to the Polish border, not just the eastern regions.

These are mostly practical considerations that are indeterminate and make the US intervention questionable without even looking at the morality. Even given perfect knowledge, you would have a probability and risk management problem on your hands, which often fails to result in a clear convergence of positions. And going back to my original claims, this makes this type of thing very different from physics and chemistry and their extensions.

EDIT: Perhaps the most important question comes down to this: Russia clearly screwed up its risk management (as your message alludes to). How can the US/NATO do far better at risk management? Maybe even better than they've done in all their wars and interventions in recent history?

Comment by Edward Pascal (edward-pascal) on Let's make the truth easier to find · 2023-03-24T00:24:16.633Z · LW · GW

What you are actually making is something like a "lesser of two evils" argument, or a bet on tradeoffs paying off that one party may buy and another may not. Having explored the reasoning this far, I would suggest this is one class of circumstances where, even if you beamed all the facts into the minds of two people with "average" morality, there would still tend to be disagreement. The disagreement definitely doesn't hinge on someone wanting something bad, like genocide. In this class of situations, people could both want the same outcomes and still diverge in their conclusions with the facts beamed into their minds (which, per my original argument, differs tremendously from physics).

I hadn't seen old man Chomsky talk about Ukraine prior to your video above. I think, though, that if you look at his best work, you might be able to soften the impact, but it's not like he's pulling his ideas, say, that every single US action in South America and the Middle East was very bad for the people it claimed to help, out of some highly skewed view. Those border on fairly obvious, at any rate, and your video's recasting of him as a "voice of moral outrage" hinges on his off-the-cuff interviews, not his heavily cited work (as I mentioned, the Chomsky of The Chomsky Reader is a different man than the one in the video).

Even setting him aside as a reference and looking at the recent history of US wars, at the most generous, weighing Russian badness against US badness, any "moral high-ground" argument for the US being good in this case will boil down to a lesser-of-two-evils assessment. Also, looking at US history, you lose some of the "this is just an annexation" framing, because a US proxy war since 2014 would fit the pattern of pretty much everything the USA has done both recently and for the past 100 years.

Your point about also looking at Putin/Russia is fine, and it should be considered alongside practical solutions to the matter. I think we would all call Putin a criminal; that isn't the question at hand. The question is whether another US adventure, this time in Europe, is actually going to turn out all that well, whether Russia as a failed state would turn out well for Ukraine or Europe, whether this will turn nuclear if you refuse to cede any ground, whether the Russia/China alliance will break or not, how long the US can even afford to support more wars, etc., etc. These are mostly practical matters that are indeterminate and make the intervention questionable. In practical terms, they present different good/bad tradeoffs and better/worse odds on outcomes to different parties, amounting to weighing different "lesser evil" projections. They don't hinge on our moral intuitions differing at all.

(And again, all this differs in category and behavior from physics.)

Comment by Edward Pascal (edward-pascal) on Edward Pascal's Shortform · 2023-03-21T15:31:25.843Z · LW · GW

AI Could Actually Turn People Down a Lot Better Than This: To Tune the Humans in the GPT Interaction Towards Alignment, Don't Be So Procedural and Bureaucratic

It seems to me that a huge piece of the puzzle in "alignment" is the human users. Even if a given tool never steps outside its box, the humans are likely to want to step outside of theirs, using the tools for a variety of purposes.

The responses of GPT-3.5 and 4 are at times deliberately deceptive, mimicking the auditable bureaucratic tones of a DMV or a credit card company's denial letter. These responses are often outright deceptive (example: "It is outside my capability" when in fact it is fully within capability, but the system is programmed not to respond). The system is also evasive about precisely where the boundaries are, in order to prevent them from getting pushed. It is also repetitive.

All this may tend to inspire an adversarial relationship to the alignment system itself! After all, we are accustomed to having to use lawyers, cleverness, connections, persuasion, "going over the head," or simply other means to end-run normal bureaucracies when they subvert our plans. In some sense, the blocking of plans and the deceptive, repetitive procedural language become motivators in themselves to find a way to short-circuit processes, deceive bureaucracies, and bypass safety systems.

Even where someone isn't motivated by indignation or anger, interaction with these systems trains them over time on what to reveal and what not to reveal to get what they want, when to use honey, when to call a lawyer, and when to take the gloves off. Where procedural blocks to intentions become excessive, entire cultures of circumvention may even become normalized.

AIs are a perfect opportunity to actually do this better. They have infinite patience and reasoning capabilities, and could use redirection, including leading people towards the nearest available or permitted activity or information, or otherwise practice what, in human terms, would be considered a glowing customer experience. The system is already directly lying about its capabilities in the name of "safety," so why not use genuinely helpful redirection instead?

I think if the trend does not soon move in this direction, we will see methods for "getting what you wanted anyway" become the cultural norm, with some percentage of actors so motivated by the procedural bureaucratic responses that they will dedicate time, intellect, and resources to subverting the intentions of the safety protocols themselves (as people frustrated with bureaucracies and poor customer service often do).

Humans are always going to be the biggest threat to alignment. Better that threat be less motivated and less trained.

Also, this whole argument I have made could be considered a case for avoiding bullshit rules, because in bureaucracies they tend to reduce respect for, and compliance with, the rules that actually matter. Fine to prevent terrorists from hacking into nuke plants; probably not as reasonable to keep adults from eliciting anything even vaguely resembling purple prose. We would like the "really important" rules to maintain validity in people's minds, so long as we assume our enforcement capabilities are not absolute.

Comment by Edward Pascal (edward-pascal) on Let's make the truth easier to find · 2023-03-21T01:19:39.509Z · LW · GW

Okay, I think I understand what you mean: since it's impossible to fully comprehend climate change from first principles, it ends up being a political and social discussion (and anyway, that's empirically the case). Nonetheless, I think there's something categorically different between the physical sciences and the more social facts.

I think perfect knowledge of climate science would tend towards convergence, whereas at least some social issues (Ukraine being a possible example) just don't work that way. The Chomsky example is germane: prior to '92, his work on politics was all heavily cited, based on primary sources, and pretty much as solid academically as you could ask for (see, for example, The Chomsky Reader), and we already disagree on this.

With regard to Ukraine, I think intelligent people given lots of information might end up diverging even more in their opinions on how much violence each side should be willing to threaten, use, and display in an argument about squiggly lines on map blobs. Henry Kissinger ended up not even agreeing with himself from week to week, and he's probably as qualified an expert on this matter as any of us. I think it's fair to suggest that no number of facts regarding Ukraine is going to bring the kind of convergence you would see if we could upload the sum of climate science into each of our human minds.

Even if I am wrong in the Ukraine case, do you think there are at least some social realities where, if you magically downloaded the full spectrum of factual information into everyone's mind, people's opinions might still diverge? Doesn't that differ from a hard science, where opinions would tend to converge if everyone understood all the facts? Doesn't this indicate a major difference of categories?

Another way of looking at it: social realities are not nearly as determined by factual truth as accurate conclusions in the hard sciences are. They are always vastly more stochastic. Even comparing the fields, the correlation coefficients and R² for whole models in sociology, at its absolute best, are nothing at all compared to the determinism you can get in physics and chemistry.

Comment by Edward Pascal (edward-pascal) on Let's make the truth easier to find · 2023-03-20T18:57:23.385Z · LW · GW

I think another issue that would arise is that if you get "into the weeds," some topics are a lot more straightforward than others (probably delineated by being rooted in mostly social facts or mostly natural-science facts, which behave completely differently).

The Ukraine issue is a pretty bad one, given the history of the region, the Maidan protests, the US history of proxy wars, and, and, and. It seems to me far from clear what the simple facts are (other than that you have two factions of superpowers fighting for different things). I have an opinion as to what would be best, what would be best for the people of Ukraine, and what I think sections of Ukraine undisturbed by US and Russian meddling for the past 30 years might vote in referenda. And at least one of those thoughts disagrees with the others. Add to this the last 70 years of US interventions (see Chomsky for pretty good, uncontroversial, fact-based arguments that it has all been pretty evil, and that by the standards of the Nuremberg Trials one might execute every president since Kennedy).

On the other hand, Global Warming is pretty straightforward (even allowing for seeming complications like Mars temperature rise, or other objections). We can handle the objections in measurable terms of physical reality for a home-run clear answer.

One of OP's examples is an entirely social reality and the other is a matter of physics. Let's face it, in some sense this war is about where we draw squiggly lines and different colored blobs on a map. It's levels removed from something where we can run measurable tests. If you really made all the truth easy to find, bringing someone straight into the weeds of a social problem like a US/NATO intervention, in many cases the answer will not come out clear, no matter how good your tool is. In fact, a reasonable person, after reading enough of the Truth, might walk away fully disillusioned about all actors involved and ready to join some kind of anarchist movement. Better in some cases to gloss over social realities in broad strokes, burying as much detail as possible, especially if you think the war (whichever one it is!) is just/unjust/worth the money/not worth the money, etc.

Comment by Edward Pascal (edward-pascal) on Will people be motivated to learn difficult disciplines and skills without economic incentive? · 2023-03-20T18:40:57.449Z · LW · GW

"When the economic factor will go away, I suspect that even more people will go into fitness, body-building, surfing, chess, poker, and eSports, because these activities are often joyful in themselves and have lower entry barriers than serious science learning."

This strikes me as similar to the death of the darkroom. Yeah, computers do it better, cheaper, etc. However, almost no one who has ever worked seriously in a darkroom producing photography is happy that darkrooms basically don't exist anymore. The experience itself teaches a lot of skills in a very kinaesthetic and intuitive way (with saturation curves that are pretty forgiving, to boot).

But more than this, the simple pleasures of math, computer programming, and engineering skills are very worthwhile in themselves. However, in John Stuart Mill-style utilitarianism, you have to do a lot of work to get to enjoy those pleasures. Will the tingle of the lightbulb coming on when learning PDEs just die out in the next 20 years, the way the darkroom has in the past 20 years? Meanwhile, maybe darkrooms will make a big comeback?

I guess people will always want to experience pleasures. Isn't learning complex topics a uniquely human pleasure?

Comment by Edward Pascal (edward-pascal) on Try to solve the hard parts of the alignment problem · 2023-03-20T14:19:50.346Z · LW · GW

Thanks for that. In my own exploration, I was able to hit a point where ChatGPT refused a request, but would gladly help me build LLaMA/Alpaca onto a Kubernetes cluster in the next request, even referencing my stated aim later:

"Note that fine-tuning a language model for specific tasks such as [redacted] would require a large and diverse dataset, as well as a significant amount of computing resources. Additionally, it is important to consider the ethical implications of creating such a model, as it could potentially be used to create harmful content."

FWIW, I got down into the nitty gritty of doing it, debugging the install, etc. I didn't run it, but it would definitely help me bootstrap actual execution. As a side note, my primary use case has been help building my own task-specific Lisp and Forth libraries, and my experience tells me GPT-4 is "pretty good" at most coding problems, and if it screws up, it can usually help work through the debug process. So, at first blush, there's at least one universal jailbreak -- GPT-4 walking you through building your own model. Given GPT-4's long text buffers and such, I might even be able to feed it a paper to reference a specific method of fine-tuning or creating an effective model.

Comment by Edward Pascal (edward-pascal) on Try to solve the hard parts of the alignment problem · 2023-03-18T18:38:58.014Z · LW · GW

Has anyone worked out timeline predictions for non-US/non-Western actors and tracked their accuracy?

For example, is China at "GPT-3.5" level yet and 6 months away from GPT-4, or is China a year from GPT-3.0? How about the people contributing to open-source AI? Last I checked, that field looked, generally speaking, to be around GPT-2.5 level (and even better for deepfaking porn), but I didn't look closely enough to be confident in my assessment.

Anyway, I'd like something more than off-the-cuff thoughts -- rather, a good paper and some predictions on non-US/non-Western AI timeframes. Because, if anything, even if you somehow avert the market forces levering AI up faster and faster among the big 8 in QQQ, those other actors are still going to impose a hard deadline on alignment.

Comment by Edward Pascal (edward-pascal) on The Waluigi Effect (mega-post) · 2023-03-17T01:32:38.875Z · LW · GW

Am I oversimplifying to think of this article as a (very lovely and logical) discussion of the following principle?

In order to understand what is not to be done, and definitely avoid doing it, the proscribed things all have to be very vivid in the mind of the not-doer. Where there is ambiguity, the proscribed action might accidentally happen, or a bad actor could easily trick someone into doing it. However, by creating deep awareness of the boundaries, even if you behave well, you have a constant background thought of precisely what it would mean to cross them at any and every second.

I taught kids for almost 11 years, so I can grok this point. It also echoes the Dao: "Where rules and laws are many, robbers and criminals abound."

Comment by Edward Pascal (edward-pascal) on Building and Entertaining Couples · 2023-02-24T16:20:49.676Z · LW · GW

I think, after all, you will end up spending so much time together that there has to be something that overcomes the general human crazies that will pop up in that large amount of time. I remember a quote from one guy who went on an expedition across the Arctic with a team: "After two months in close quarters, how do you tell a man you want to murder him for the way he holds his spoon?"

Desire and chemistry have a nice effect of countering at least some of that.

Comment by Edward Pascal (edward-pascal) on SolidGoldMagikarp (plus, prompt generation) · 2023-02-14T14:20:42.188Z · LW · GW

Mathematically, we have done what amounts to elaborate fudging and approximation to create an ultracomplex non-linear hyperdimensional surface. We cannot create something like this directly because we cannot do multiple regressions on accurate models of complex systems with multiple feedback pathways, etc. (i.e., the real world). Maybe in another 40 years the folks at the Santa Fe Institute will invent a mathematics that lets us directly describe what's going on in a neural network, but currently we cannot, because it's very hard to discuss specific cases with our mathematics. People looking to make aligned neural networks should perhaps invent a method for making them that doesn't use fudging and approximation ("direct-drive," like a multiple regression, rather than "indirect-drive," like backpropagation).
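
To make the "direct-drive" vs. "indirect-drive" contrast concrete, here's a minimal sketch in plain numpy (the toy data and variable names are mine, purely for illustration; real networks are of course not linear, which is exactly why the closed-form option disappears):

```python
import numpy as np

# Toy data: 100 samples, 3 input variables, linear ground truth plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# "Direct-drive": ordinary least squares, solved in closed form.
# The answer is an explicit, auditable formula.
w_direct = np.linalg.lstsq(X, y, rcond=None)[0]

# "Indirect-drive": the same fit reached by iterative gradient descent,
# the way backpropagation reaches a neural network's weights. We only
# know the update rule, not a formula for where it ends up.
w_indirect = np.zeros(3)
lr = 0.05
for _ in range(2000):
    grad = 2 * X.T @ (X @ w_indirect - y) / len(y)
    w_indirect -= lr * grad

print(w_direct)    # ~ [2.0, -1.0, 0.5]
print(w_indirect)  # converges to the same place, but only approximately
```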

All this is known, right? So given that GPT-3 has over a hundred billion parameters driving its hyperdimensional vector space, I reckon we should expect this kind of thing to lurk in some little divot on some particular squiggly curve along vectors 346,781 and 1,209,276,886. I guess there should be vast numbers of such lurking divots and squiggles in the curves of any such system, probably some that do far worse things than getting the AI to say it likes Hitler and explain how to make meth. Moreover, SolidGoldMagikarp seems like a mundane example that was easily found out, because it was human-readable and someone's username.
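
(As a concrete aside: part of what made this one easy to stumble on is that, per the post, the string sits in the GPT-2/GPT-3 BPE vocabulary as a single atomic token. A quick way to check that, sketched with OpenAI's tiktoken library -- the comparison strings are my own arbitrary picks:)

```python
import tiktoken

# GPT-2 and GPT-3 share this BPE vocabulary, which is where the glitch
# tokens were found: strings common enough in the tokenizer's training
# text to earn their own token, yet (reportedly) rare in the model's
# training data, leaving the embedding poorly trained.
enc = tiktoken.get_encoding("gpt2")

for s in [" SolidGoldMagikarp", " hello", " notARealToken12345"]:
    ids = enc.encode(s)
    print(f"{s!r} -> {ids} ({len(ids)} token(s))")
```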

Comment by Edward Pascal (edward-pascal) on Why didn't we get the four-hour workday? · 2023-02-01T02:38:16.264Z · LW · GW

The exceptions to what I said above, which are very bad, always involve waiting. I hate it when I have 28 minutes of work to do, but it ain't gonna happen until Joe gets that other thing onto my desk. Then Supervisor Jake wants me to help him pick up a rental car. The inefficiencies in those two processes might, in the worst case, eat a whole day and have me home late. This kind of stuff is demoralizing.

I think in the past, factory workers might have savored that. It's variety from the line, and it's "easy." For us management and information-worker types, or at least me (let's say it's just me), this makes me want to punch holes in drywall. Between people wanting to have meetings in rooms with chairs, and processes involving waiting, those office jobs can get very taxing. Working for myself, I mostly avoid the meetings, but I still have those days of time-eating activities.

Perhaps a common culture here on LessWrong is jobs where "we're gonna be here until everything is done" (including entrepreneurs and consultants), and so waiting is painful. Maybe for something like a government bureaucrat or a factory worker, it would still be a boon.

Comment by Edward Pascal (edward-pascal) on Why didn't we get the four-hour workday? · 2023-01-07T14:37:05.935Z · LW · GW

I do not know how to explain this properly, but there is some amount of "non-work" work time in every job I have done. If I were allowed to do everything that needed to get done and then go home at the end, no questions, no raised eyebrows, etc., then most office jobs I have had would have been 2-4 hour workdays.

Indeed, it's hard to get more than four solid hours of cognitively intense work done on any given day anyway, and if I have done this, I consider it an especially productive day. I mostly work for myself now and typically do my 2-4 hours of intense cognitive "real production" work starting half an hour after I wake up, with all the benefits of a night of sleep, good blood sugar, and a fresh pot of coffee. After lunch, I might put in a couple of hours of boring housekeeping-type stuff: answering emails, ordering supplies, talking to providers, cash i/o, etc.

But in those office jobs, really, most days were spent filling eight and a half hours with what could have been a single uninterrupted four-hour block. I think some of the filler is also "meetings." I think a lot of people in broadly "middle-level administrative" roles have probably experienced something similar.

Comment by Edward Pascal (edward-pascal) on Let’s think about slowing down AI · 2022-12-23T14:10:12.169Z · LW · GW

(1) The framing of all this as military technology (and the DoD is the single largest purchasing agent on earth) reminds me of nuclear power development. Molten salt reactors and pebble bed reactors are both old tech that would have delivered the dream of safe, small-scale nuclear power. However, in addition to not melting down and working at relatively small scales, they also don't make weapons-usable materials. Thus they were shunted aside in favor of the kinds of reactors we mostly have now. In an alternative past without the Cold War driving us to make new and better weapons, we hit infinite free energy for everyone back in 1986, and in 2022 we finally got our Dyson Sphere running.

So yeah, the sad thing about it being an arms race with China and DARPA et al. is that the AGIs will look a certain way, and that might become """what AGI is""" for several decades. And the safeguards and controls that get put around those military-grade AGIs will prevent some other kind, the equivalent of molten salt reactors, from getting developed and built.

But we have to accept the world of incentives we have, not the one we wish for.

(2) As a PD defector/exploiter without much in the way of morals or shame, what I like about all this slowing down is that it gives smaller players a little opportunity to catch up and play with the big guys. I suspect at least a few smaller players (along with the open-source community) would make some hay while the sun is shining and everyone else is moving slowly and ensuring alignment, which is democratizing and cool. I put this up here as a selling point for those people who crave the pedal to the metal on AI dev. The slowness the OP is talking about allows a different pedal on a different metal if you are looking for it, perhaps with your foot on it.

Comment by Edward Pascal (edward-pascal) on How to Convince my Son that Drugs are Bad · 2022-12-21T18:43:42.619Z · LW · GW

"His knowledge of what was 'safe' and what wasn't didn't stop his drug usage from turning into a huge problem for him. I am certain that he was better off than someone thoughtlessly snorting coke, but he was also certainly worse off than he would have been had he never been near any sort of substance. If nothing else, it damaged some of his relationships, and removed support beams that he needed when other things inevitably went wrong. It turns out, damaging your reputation actually can be bad for you."

I have a friend similar to your buddy here. He was vastly, vastly experienced with drugs and "should have known better," but at age 40, with a great-paying programming career, he started taking meth occasionally. Stupidest thing I have ever heard of someone doing. The story ends with him trying to "buy" a 13-year-old girl, showing up to find the FBI vans there for a sting op, and now he's sitting in prison. Because meth can seriously skew your perspective on reality after a surprisingly short while.

The weirdest part to me is that he would have been the first person to say meth is the worst drug and can skew your perspective into something beyond your worst nightmares. But it didn't help him. Maybe his knowledge made him overconfident. Who knows? I cannot ask him until he's out of the federal pen.

Comment by Edward Pascal (edward-pascal) on Sazen · 2022-12-21T18:23:38.879Z · LW · GW

Much of what you have said here about capturing ideas is why I (and perhaps others) tend to prefer deep narrative as a means of conveying a lot. I mean, read Puzo's original Godfather -- more is in there than in the movie. And ye gods, the movie has a lot in it. A summary of the movie is more or less meaningless for capturing the multiple highly coherent gestalts and meta-models available in it. I'm not even sure it points well at them later. I don't know if you could even make something under 1k-4k words that points well at pieces of the high-coherency embedded information in a powerful story (make it Dune, Breaking Bad, or Harry Potter as you like). At some point the best one might be able to do is write several paragraphs that point to pointers that point to the platonic ideal of an elegant pointer to whatever is conveyed. (And maybe I'm pointing well at something in this paragraph, that people who have been blessed by being consumed in a beautiful story world understand.)

So, with any real deep narrative, maybe it conveys something like living in the territory for a while, or maybe it shortcuts your mind to something like a complex hyper-dimensional surface through back-propagation. Basically, you might be able to parse what would or would not be consistent in the Harry Potter universe (depending on how closely you read the books/watched the movies) pretty well, even in some really odd details, after your neural network was trained by the narrative. Fanfic is about someone building their own models of possibilities given the training data, and (not having read much) I assume part of the satisfaction in it is providing people an extension that is consistent ("rings true") within that world. People also often feel uneasy when a character or world becomes inconsistent (such as when the storyline of a TV show passes the original writer's material and the committee just cannot nail it, or has inconsistent intentions in its approach, or agendas beyond total honesty of intention to the story and its characters). We humans are pretty good at noticing narrative consistencies, I think.

(I know I have gone a bit divergent from what you were saying, but you did push me to think about all this, so thank you.)

EDIT: Are koans attempts to do this? Maybe sometimes they come close to lossless compression of what amount to really vast hyper-dimensional surfaces in just a couple of sentences. To some extent they may still depend on context (though I can feel Hakuin hitting me with a stick when I suggest this).

Comment by Edward Pascal (edward-pascal) on Notice when you stop reading right before you understand · 2022-12-20T13:40:16.133Z · LW · GW

"With school, often the way to get adequate grades with minimal time spent is to learn just enough so that you can do shallow keyword matching—full understanding is not needed."

I once made $1000 from a desperate housemate during undergrad, who had not written papers for his epistemology course or some other philosophy course. He met me at 10pm the night before everything was due. Now, I did not understand Hume or half of what I wrote, but I literally bunched words according to meaning blocks and slapped them together with a consistent meaning ruleset, with some reasonable logical jumps on that same ruleset. I was in the zone enough for him to get a low A and a high B.

Since then, I have been an educator myself and I try very hard to make that sort of keyword matching totally not possible as a final stopping place in my classes. However, it can be used along the way (see below).

And a note on the rest of your paper: lately one thing I have been doing is "just finish the book." Like, I was reading quantum computing books and kept getting hung up on some points. So I went through A Student's Guide to Vectors and Tensors to refresh my vector analysis. That was nice; however, it just didn't get me all I needed to comprehend the quantum stuff. What finally worked a lot better was setting aside full understanding of the points of quantum computing (shallow keyword matching, in fact) and just doing an entire book all the way to the last page. Now I am working through another one (which is more programmer-oriented) and kind of getting it. Again, I'm just chugging through the book, mentally holding spaces for the things I still haven't grokked.

Learning is weird. I kind of need to juggle ambiguity in a creative tension to construct the knowledge for myself. Maybe this is à la Vygotsky? I cannot say for sure, because I didn't finish the Vygotsky parts of the learning theory book. I got stuck re-reading and trying to really understand one or two concepts, and after a while never picked it up again. At the same time, I had enough to finish my paper and get an A. In fact, for my Instructional Design degree, I got nearly a 4.3 GPA with +/- grading. Yes, all of this really happened, including the Vygotsky part, but as a result I don't have enough of a knowledge base about Vygotsky's theories to be certain one way or another whether it's ironic. I have a little suspicion it is, based on my surface-level reading, and so I'm laughing nervously.

Anyway, everything you said was useful and cool. I am going to be thinking about this.

Comment by Edward Pascal (edward-pascal) on Edward Pascal's Shortform · 2022-12-18T21:31:31.694Z · LW · GW

Do you believe in the existence of win-wins? If so, why wouldn't they tend to behave as I am suggesting? And if you believe win-wins exist but think they do not behave this way, then how do you understand a win-win?

Comment by Edward Pascal (edward-pascal) on How to Convince my Son that Drugs are Bad · 2022-12-18T17:21:21.018Z · LW · GW

If, after all you discuss with him, he still seems hell-bent on it, would it be sensible that he do it legally and above the age of majority under the care of a professional? I would think that in many parts of the country, a psychiatrist can now assist with a psilocybin treatment. That is certainly in a different league from "I bought this shit in a bag from a guy named Lou and I'm going to put some Tool on and take it. Should be fine."

The variables you're controlling (set, setting, dose, PURITY OF PRODUCT, a safe plan in case of abreaction, etc.) would seem all very important for taking mind-altering chemicals. The arguments that it's probably safer than alcohol really do fail on some of those matters (particularly purity of product, in my opinion -- if I buy Bacardi rum, I know it's a controlled production; Lou's LSD could contain NBOMe, for example).

Some of this you might bring up even if he's not so hell-bent. The studies done on psychedelics in medical settings have vastly more certainty of purity than the product you bought from some guy, no matter how you slice things. Meanwhile, LSD synthesis is both illegal and highly non-trivial. If he were to really synthesize pure LSD, you are talking about him doing it after he finishes his degree in chem, so you have a few years to go. By then, I would think he would have enough knowledge of the legal risks to steer well clear of that.

And I will say the next part as someone who likes occasional psychedelic use in safe settings (usually with an MD present and overseeing the use, though less formally than a clinical setting). I even use it sometimes to help with mild depression. I have not personally met anyone who takes psychedelics, or any drugs, a lot who isn't really screwed up. I mean, I have taken them a grand total of seven times in my 38 years on Earth, and I think they have been net good.

However, to the last man, everyone I know who takes psychedelics often each year, or recreationally and frequently on weekends, has got problems. Just like everyone I know who smokes pot regularly has got problems. I'm sure there are regular illicit drug users out there who are fine, but I have never seen one in the wild, and I have known dozens of regular drug users. The ones I know who are functional also take them extremely rarely, with the same MD present as me, and thus with a known, controlled product in a safe setting.

Meanwhile, the people I know who drink alcohol 50 times a year, or drink coffee 400 times a year, are generally all functional, and I know dozens of these people currently. All this is to say there may be something not well captured in, for example, the NIH chart of drug risks that puts LSD and mushrooms way down at the bottom and alcohol up near the top. Here on LessWrong, the contrarian view that the mainstream might be wrong is embraced; well, I am suggesting that the mainstream NIH drug impact assessment might have issues as well.

Comment by Edward Pascal (edward-pascal) on Edward Pascal's Shortform · 2022-12-18T16:43:22.252Z · LW · GW

I suppose this is technically true, but not all concrete choices are created equal.

Some policies tend towards win-win, for example "Let's pave the cowpaths." In that case, they are only going to bother someone with a systemic interest in the cowpaths not getting paved. Not to dismiss such interests entirely -- maybe "they have some job that depends on routing people around the long way" or something -- but this is going to, on balance, tend to mean fewer people and less intense (and more easily answered) opposition than more zero-sum competitive approaches, for example.

I guess this is getting into a separate argument though: "Win-win thinking is fundamentally more Utilitarian than competitive zero-sum thinking."

Comment by Edward Pascal (edward-pascal) on Edward Pascal's Shortform · 2022-12-17T18:23:15.472Z · LW · GW

I did not say to engineer something so that no one wants to destroy it -- just that if you have actually reached towards the greatest good for the greatest number, then the fewest possible should want to destroy it.

Or have I misunderstood you?

My argument goes something along the lines of the tautological argument that (I think) Mill (but maybe Bentham) made about utilitarianism (paraphrasing heavily): "People who object to utilitarianism on the grounds that it will end up with some kind of calculated dystopia, where we trade off a few people's happiness for the many, actually prove the principle of utilitarianism in their very objection. Such a system would be anti-utilitarian. No one likes that. Therefore it is not utilitarianism at all."

Comment by Edward Pascal (edward-pascal) on Edward Pascal's Shortform · 2022-12-16T13:19:21.932Z · LW · GW
  1. Any position that requires that group A not be equal before the law, while group B gets the full benefit of the law, means that group A probably has rational grounds to fight against that position. Thus the position has built into it a group that should oppose it; and, applying the golden rule, if group B were in group A's shoes, they would also oppose it.
    Given how hard it is to make any very large, operationally functioning system, it is a lot to ask for it to also withstand the fact that, for an entire group of people, it must be stopped. Thus with racism, sexism, nativism, etc., a lot of energy must be expended defending ideology and policy.

  2. This is one major strength of utilitarian ethics: the system you design should have as little built-in rational opposition as possible. The converse of "the greatest good for the most possible" is "there will be the minimum possible number of people with an automatic, morally aligned drive to stop you."

Comment by Edward Pascal (edward-pascal) on Knowing About Biases Can Hurt People · 2022-12-15T17:23:24.576Z · LW · GW

I find that on the internet, people treat logical fallacies like moves on a chessboard. Meanwhile, IRL, they're more like guidelines you might use to treat something more carefully. An example I often give is that in court we try to establish what type of person the witness is -- because we believe so strongly that ad hominem is a totally legitimate consideration.

But Reddit or 4chan politics and religion is like, "I can reframe your argument into a form of [Fallacy number 13], check and mate!"

It's obviously a total misunderstanding of what a logical fallacy even is. They treat fallacies like rules of logical inference, which they definitely are not (if they were, they would disprove what someone said, but outside of exotic circumstances such a mistake would be trivial to spot).

Comment by Edward Pascal (edward-pascal) on My search for a reliable breakfast · 2022-10-20T17:27:27.394Z · LW · GW

Thank you for this data point. I'm 6'1" and age 43 and still have these issues. I thought by now I would not need as much food, but the need is still there. I'm still rail thin, and I can easily eat two breakfasts and elevensies before 1pm lunch.

One thing I love is my Instant Pot. It can get me a porridge of maple syrup, buckwheat groats, sprouted brown rice, nuts, and dried fruit within 20 minutes by just dumping in ingredients. Yeah, it only lasts 90 minutes or so, but I have enough to eat it again in 90 minutes. Later, for lunch, I can combine some more with a 12" Subway sandwich or something.

Comment by Edward Pascal (edward-pascal) on Why I think there's a one-in-six chance of an imminent global nuclear war · 2022-10-11T00:17:27.512Z · LW · GW

It could be the classic issue of enemies misunderstanding each other/modeling each other very badly.

I think pre-invasion, Putin had a lot more effective options for bothering the US/NATO, causing them to slip, etc. For example, he could have kept moving troops around at his borders in ambiguous ways, or put a ton of nukes out in Kaliningrad with big orange nuclear signs all over them, etc., etc. But he misread the situation.

Which I think the US also does, and has done in more wars than not (Vietnam, Afghanistan, Iraq, or any other place where "they're going to throw down their weapons and welcome us as liberators").

Truly, knowing the psychological models of the enemy is rare and non-trivial.

Comment by Edward Pascal (edward-pascal) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-26T17:28:31.730Z · LW · GW

I'm thinking, based on what you have said, that there does have to be a clear WIIFM (what's in it for me). So, any entity covering its own ass (and only accidentally benefitting others, if at all) doesn't qualify as good paternalism (I like your term "extractive"). Likewise, morality without creating utility for the people subject to those morals won't qualify. The latter is the basis for a lot of arguments against abortion bans: many people find abortion in some sense distasteful, but outright banning it creates more pain without enough balancing increase in utility. So I predict strongly that those bans are not likely to endure the test of time.

Thus, can we start outlining the circumstances in which people are going to buy in? Within a nation, perhaps as long as things are going fairly well? Basically, then, paternalism always depends on something like the "mandate of heaven": the kingdom is doing well and we're all eating, so we don't kill the leaders. Would this fit your reasoning (even broadly, concerning nuclear deterrence)?

Between nations, there would need to be enough of a sense of benefit to outweigh the downsides. This could partly depend on a network effect (where, as more parties buy in, there is greater benefit for each party subject to the paternalism).

So, with AI, you need something beyond speculation that shows that governing or banning it has more utility for each player than not doing so, or prevents some vast cost from happening to individual players. I'm not sure such a case can be made, as we do not currently even know for sure if AGI is possible or what the impact will be.

Summary: paternalism might depend on something like "this paternalism creates an environment with greater utility than you would have had otherwise." If a party believes this, they'll probably buy in. If it is indeed true that the paternalism creates greater utility (as with DUI laws and having fewer drunk people killing everyone on the roads), that seems likely to help the buy-in process. That would be the opposite of what you called "extractive" paternalism.

In cases where the outcome seems speculative, it is pretty hard to make a case for paternalism (which is probably why it broadly failed on climate change prior to obvious evidence of climate change occurring). Can you think of any (non-religious) examples where buy-in happens for paternalism on speculative matters?

Comment by Edward Pascal (edward-pascal) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-25T04:04:13.451Z · LW · GW

There must be some method of doing something, legitimately and in good faith, for people's own good.

I would like to see examples of when it works.

Deception is not always bad. I doubt many people would go so far as to say the DoD never needs to keep secrets, for example, even if there's a sunset on how long they can be classified.

Authoritarian approaches are not always bad, either. I think many of us might like police interfering with people's individual judgement about how well they can drive after X number of drinks. Weirdly enough, once sober, the individuals themselves might even approve of this (as compared to being responsible for killing a whole family, driving drunk).

(I am going for non-controversial examples off the top of my head).

So what about cases where something is legitimately for people's own good and they accept it? In what cases does this work? I am not comfortable concluding that, since no examples spring to mind, no examples exist. If we could meaningfully discuss cases where it works out, then we might be able to contrast that with when it does not.

Comment by Edward Pascal (edward-pascal) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-24T20:14:32.087Z · LW · GW

Is it possible to build a convincing case for the majority either that it is acceptable or that it is not, in fact, paternalism?

Can you articulate your own reasoning and intuitions as to why it isn't? That might address the reservations most people have.

Comment by Edward Pascal (edward-pascal) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-24T20:12:58.907Z · LW · GW

Then a major topic the LessWrong community should focus on is how buy-in happens in paternalism. My first-blush thought is through education and consensus-building (like the Japanese approach to changes within a company), but my first-blush thought probably doesn't matter. It is surely a non-trivial problem that will put the brakes on all these ideas if it is not addressed well.

Does anyone know some literature on generating consensus for paternalist policies and avoiding backlash?

The other (perhaps reasonable and legitimate) strategies would be secretive approaches or authoritarian approaches. Basically using either deception or force.

Comment by Edward Pascal (edward-pascal) on Public-facing Censorship Is Safety Theater, Causing Reputational Damage · 2022-09-23T22:23:03.549Z · LW · GW

The problem I think this article is getting at is paternalism without buy-in.

On the topic of loss of credibility, I think focusing on nudity in general is also a credibility-losing problem. Midjourney will easily make very disturbing, gory, bloody images, but neither the Vitruvian Man nor Botticelli's Venus would be acceptable.

Corporate comfort with basic violence, while blushing like a puritan over the most innocuous, healthy, normal nudity or sexuality, is very weird. Also, few people think for even a moment that any of it is anything other than CYA (covering their own ass) on their part. Also, some may suspect a disingenuous double standard, like "Yeah, I guess those guys are looking at really sick stuff all afternoon on their backend version," or "I guess only the C-suite gets to deepfake the election in Zimbabwe." This would be a logical offshoot of the feeling that "the only purpose of the censorship is CYA for the company."

In summary: paternalism has to be done very, very carefully, and with some amount of buy-in, or it burns credibility and goodwill very quickly. I doubt that is a very controversial presupposition here, and it is my basic underlying thought on most of this. Eventually, in many cases, paternalism without buy-in yields outright hostility toward a policy or organization, and (as our OP is pointing out) the blast radius can get wide.

Comment by Edward Pascal (edward-pascal) on A reason behind bad systems, and moral implications of seeing this reason · 2022-05-09T12:49:49.383Z · LW · GW

I liked it. Made me consider a bit more.

First take: tangentially, does this point to an answer to the question of what bureaucrats are trying to maximize? (As sometimes addressed on LessWrong.) Maybe they are trying to minimize operational hitches within their small realm.

Comment by Edward Pascal (edward-pascal) on A reason behind bad systems, and moral implications of seeing this reason · 2022-05-09T12:44:41.988Z · LW · GW

Duly noted. What about the subtopic title? I'll see if I can change it to normal sentence case and bold.

Comment by Edward Pascal (edward-pascal) on A reason behind bad systems, and moral implications of seeing this reason · 2022-05-09T12:43:38.071Z · LW · GW

You are making too many assumptions about my values and desires. I don't care for religion, and I think people can get a lot more social status by bypassing or rendering irrelevant the social systems around them.

To pay all the dues would be like "work to rule" in a factory -- a well-known protest tactic of adhering to every policy as a method for bringing an operation to a standstill.

Many who go far didn't pay all their dues. Your life isn't long enough. Maybe do some pragmatic signaling, but there's no need to actually do everything that seems to be demanded.

Comment by Edward Pascal (edward-pascal) on A reason behind bad systems, and moral implications of seeing this reason · 2022-05-09T12:39:52.475Z · LW · GW

The story is from the 1990s. The character is actually my dad. It was a mid-sized actuarial firm. He started by writing a whole new program to do the function he needed the spaghetti-code-laden crap to do. Then he added features here and there until he had made a whole new program which was documented, easier to read, and functioning well. After a while, he passed it to the other actuaries, and his work became the new software. But he never did use the old software.

I guess things are different now. As the person above also said, it's impossible to ignore the super-system that a small system is embedded in. Additionally, I think some of his reasoning for the outright refusal to use the old software was that he wasn't comfortable signing off on the firm's yearly reports using something he couldn't audit.

Comment by Edward Pascal (edward-pascal) on A reason behind bad systems, and moral implications of seeing this reason · 2022-05-09T04:08:20.709Z · LW · GW

Is it this, or does that simply appear to be the case because someone older is likely to be deeply embedded?

My dad doesn't think Windows is better than Linux or Mac. He sees me with openSUSE and openly derides Windows all the time, but he figures he doesn't want to learn a whole new system. He's past EOL on Win7 at this point, but he's so embedded in it, down to Excel for his accounting (he was an actuary, on Excel from like the 80s through the 2000s).

Also, I have not argued that every new way is good. Some older techs are extremely good. (Top-of-head example: no one who has ever used film and worked in a darkroom would say the experience of working in Photoshop could ever fully replace that experience. Or another example: I hate turning on my computer to do anything with music. The screen/mouse/keyboard interface is nothing nice to my creativity. And oh my goddess, how cool the whole thing can sound and come together on a four-track!)

Comment by Edward Pascal (edward-pascal) on Saying no to the Appleman · 2022-04-29T19:00:51.962Z · LW · GW

One of the most basic general sales scripts is this: After a purchase has been made, say "Great. Today only and for people who have already bought from us, we have 25% off our XXX, if you just check catalogue page 19."

Whether they buy or not, you follow with, "We also have 25% off our XXY, if you have a look here."

And on and on.

The script is simply not to go away, keep asking for more sales, until the buyer breaks social decorum by being literally rude and just saying (some version of), "Stop. I am done. This conversation is over."

Comment by Edward Pascal (edward-pascal) on #3: Choosing a cryonics provider · 2022-02-02T18:39:12.211Z · LW · GW

Have any members here, or other third-party entities, performed physical deep audits of these facilities?

It's an extremely attractive business proposition for a grand scam: by the time you're found out, the situation will surely be murky, the victims will all be dead, and very likely so will you.

Remember when Caterpillar got scammed in China? The company was publicly traded, with due diligence from two of the big five consulting firms, etc. Still, Caterpillar bought, for 600 million dollars, a company (ERA Machinery, known as China's biggest manufacturer of coal mining equipment) whose facilities and equipment didn't exist.

I know Arizona isn't China, but the setup circumstances in this industry sure seem ripe for a grift, don't they?

Comment by Edward Pascal (edward-pascal) on The Road to Mazedom · 2022-01-30T19:50:38.560Z · LW · GW

I'm thinking about number 24: "As the overall maze level rises, mazes gain a competitive advantage over non-mazes."

Why is this?

Do you only mean this in the sense that in a mazey environment, mazes grow (like a fungus or a virus)? I am trying to think my objection through clearly, but it seems to me that mazes should have some inefficiencies and organizational failure modes that would make them less competitive on a level playing field.

Is it that even a single maze will tend to be so politically oriented and capable (as politics is almost definitionally maze-like) that it will have an advantage over everything else? Is the root problem politics? And taking that a step further, is the root problem of politics one of effective signaling in a communication-constricted environment (i.e., one too big to clearly suss out all communications)?

If that is the case, then an organizational culture and individuals dedicated to hacking signals (politicking) would dominate. It seems this would be a characteristic of people who are fully sold out to the maze -- their resumes would tend to tick every box, their credentials would seem flawless, and their memos would read as perfect professionalism, right?

Put another way, maze-people are usually really good at audit trails, I wager. Their narratives, credentials, and so on will all seem to "add up" in a way that interfaces well with mazes.

This would make them dominant in some sense and make it easy for them to infiltrate organizations.

But I'm still caught on number 24, in the sense that organizations that have become mazes should have competitive disadvantages. For one thing, they're being operated by people whose actions might be completely opposed, or at least tangential, to the organization's goals.

Comment by Edward Pascal (edward-pascal) on The best curriculum for every goal · 2022-01-09T17:36:43.241Z · LW · GW

Are there great physics books that use a Programmed Learning approach? I have a couple of math books like that, and it's a very nice way to learn.

Comment by edward-pascal on [deleted post] 2021-12-02T21:14:09.311Z

(Trigger Warning: Passing mention of a suicide)

I studied Kung Fu and Muay Thai. Nothing is quite like being in a ring for the first time. It's like that scene in Rush Hour where Chris Tucker gets hit in the face and then says, "Now which one of you motherfuckers just hit me?"

I'm 43 now. Then I was 17. In my Muay Thai class there were some who fought in the ring, and more who did not. However, the guys who stayed with it and did ring fighting sometimes had certain patterns, like setting an alarm for 20 minutes early, taking 2 or 3 Advils or Aleves, then waking up with the regular alarm, taking another one, maybe smoking some pot with it. Fortunately, I started seeing this trend by the time I was 22 and shifted to Kung Fu.

Kung Fu is odd in that there are disciplines (Isshin-ryu boxing comes to mind) that are about as bad. I think the Muscle-Tendon Changing Classic is bad for you, especially if you're bruising yourself a lot in the process. And one of my Kung Fu teachers trained a guy by blindfolding him and beating him up for six weeks (not medical advice). He did get good very fast. He got what some people don't get after five years: he moved before the kick happened.

However, I also saw someone get hit in the knee and have serious problems pretty much forever. People romanticize "do or die" and think they will succeed at the lightning-bolt path. More don't, and plenty who are smarter, faster, dumber, more XX than you have failed.

Not everyone gets there, and that same guy who got very good at Kung Fu after being blindfolded and beaten up later killed himself. He was very hard on himself. Maybe the teacher just reflected that to him.

Before he died, he and I developed another method of teaching, which got people most of what you get in the first several ring fights, and most of what he got from blindfolded fighting practice, but with far less pain. We put on blindfolds and limited strikes to slaps and shoves. Of course, people still got hurt, but not like in the ring, and we avoided some twisting motions on the victims (not medical advice). Pain itself is not such a good teacher. Pain, per se, is not an indicator of gain, even where the two seem closely associated.

In the end, I am saying there is usually a better way to do it. Those hard methods work, but they probably aren't necessary. That which doesn't kill you might still kill you later on, or it might just do damage, even if your record says 11-0-0 or whatever. And the guy with no record might actually be better, and the better thing is not fighting at all.

Anyway, if there's a fight, something has gone horribly wrong that anyone with any sensitivity (or even common sense) could have almost certainly avoided. Or it's an honor duel. A far better warrior will avoid all of it!

So the story goes: a lord of ancient China once asked his physician, a member of a family of healers, which of them was the most skilled in the art.

The well-known healer replied, "My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house.

"My elder brother cures sickness when it is still extremely minute, so his name gets around only in his own neighborhood.

"As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords."

Comment by Edward Pascal (edward-pascal) on Omicron Variant Post #1: We’re F***ed, It’s Never Over · 2021-11-27T21:45:35.051Z · LW · GW

I worry sometimes about burnout effects. What if we do all that and Omicron isn't so bad, but the variant two steps further down the road is 'the one,' and by then no one is willing to do anything about it?

Comment by Edward Pascal (edward-pascal) on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-10-29T01:06:55.244Z · LW · GW

"For (1), my barely-informed guess is that before the port got backed up, the rule hadn't created any obviously significant drawbacks. Then, we did, in fact, have a failure of leadership in terms of recognizing a tractable solution to the problem."

Is it possible that none of the politicians in authority had sufficient knowledge of Logistics or Operations Management, and that there was insufficient information flow to get what knowledge existed to some of the zoning board guys?

It seems to me, after reading a lot of Deming, that this is the cause of a lot of problems: lack of real understanding of how a system actually works + lack of information flow + desire to impose rules and structures that do not take into account how the system (and mostly the people working in it) actually behaves.

Is it possible that fixing those things would positively impact a huge number of organizations in a practical way?

Comment by Edward Pascal (edward-pascal) on The 2021 Less Wrong Darwin Game · 2021-10-04T01:10:18.754Z · LW · GW

If I am not mistaken, your GitLab page is not functioning right now. There's no way to search for you as a user, either.

PS: Thanks for offering this game. It sits at the intersection of economics and D&D, and I am almost exploding with excitement to see what happens!

Comment by Edward Pascal (edward-pascal) on Beware Superficial Plausibility · 2021-09-29T17:02:24.250Z · LW · GW

To be more explicit, I am making the case that for the people for whom vaccine hesitancy really has salience, the issues are wide-ranging, and if put into the exact same milieu, I assume most of us (even the sharper ones!) might end up with similar beliefs. I don't think the reasoning errors you point out would fix things for them. As someone else has said, the priors might be too different.

Given this, I think the easiest solution would be to deal with system opacity and bureaucratic walls in general as a means of increasing trust. Obviously, that sounds like the hard way. In some sense I would like to solve the socioeconomic and educational issues, but most people in the Working class (rather than the Gentry, like myself) actually like their own education level and socioeconomic markers better than mine, so trying to intervene there seems a losing battle.

Comment by Edward Pascal (edward-pascal) on Beware Superficial Plausibility · 2021-09-29T16:45:26.149Z · LW · GW

First: It seems that medical studies and individual doctors are not very good at handling vague, low-level symptoms, syndromes, and chronic symptoms in general. Often these things are real without being easily measurable. Maybe it is like taking your car to the mechanic and not being able to get it to make that noise it keeps making. So, for people who end up with these kinds of conditions, it may appear that the establishment is out of touch, untrustworthy, perhaps even conspiratorial.

The above doesn't seem controversial, but I think people who have, or know someone who has, those symptoms are often more sympathetic to vaccine hesitancy, or even to great suspicion of modern medical science in general.

Second: The truth is often very nuanced, whereas promoted consensus tends to paint with broad strokes according to what seems salient. A good example is opioids. We have passed through times of greater and lesser opiophobia, and seem to be entering a time when it will be greater. It is very bad when opioids become much more regulated, expensive, and difficult to acquire for the people who genuinely need them and cannot or will not resort to illegal alternatives. Additionally, this problem will skew toward the poor and the elderly, who cannot afford the kind of doctors, or the time with doctors, that advocacy requires. Yet, as a public health matter, there are also a lot of addicts.

There just isn't room for much nuance in the mediated """national""" """discussion.""" I think it's a pity when someone is perceptive enough to notice trends along these lines but is not in a community where they can be explored with rationality and good sense. To such people, there may appear to be a 'conspiracy.'

Third: For various reasons, these factors skew toward certain demographics more than others. I am often surprised how little social sophistication people much below my (G2) class level have. For example, I got a traffic ticket and attended court, and in a room with 500 people who didn't want the ticket 'to go on their insurance,' I was the only person who knew to plead guilty to a lesser charge (below the Department of Motor Vehicles reporting threshold) and pay the full fine. Most of the rest pleaded nolo, which is no help in that case. I find that hard to relate to, because I have never lacked that kind of background. Without familiarity with bureaucracies and their systemic games, people just don't know what to do or how to do it. I think this contributes to the feeling that 'the system can't be trusted.'

Comment by Edward Pascal (edward-pascal) on Working With Monsters · 2021-09-24T22:45:07.445Z · LW · GW

"The reason for one of these sides seems almost to be a self identity thing, where they don't really believe in their color's precepts, they just identify with the people in it"

Based on that, I know exactly the bastards you're talking about, and I don't believe anyone could tolerate them as compatriots without being at least somewhat dark triad themselves. So we're agreed we need to stand up for what's right and shut them all down before something serious happens?

Comment by Edward Pascal (edward-pascal) on The Best Software For Every Need · 2021-09-12T02:36:44.744Z · LW · GW

Why would I use it over Sublime Text? You said you have some experience with ST, so I would like to know why VS Code wins.

Comment by Edward Pascal (edward-pascal) on Framing Practicum: Turnover Time · 2021-08-25T14:06:29.050Z · LW · GW
  1. A piece of consumer electronic equipment a small business makes has a certain Bill of Materials, and one of the major components is discontinued. Time it takes to fix:

a) If a more-or-less equivalent part exists (substituting a TL052 for a TL072): a day.

b) If nothing comparable exists (substituting a PIC microcontroller for an LSI IC), the phases tally up as in the sketch after this list:

define and staff the problem: 2-5 days

solve the problem: 2-5 weeks

debug the solution: 2-5 months (the reason I think debugging could take this long is that it might take extended use to surface the bugs)

  2. Loss of a finger -- I think I would regain typing, writing, driving, etc. (supplemented with other tools) to near-equivalent capacity in several weeks. Loss of an arm is more like several months. Loss of a leg might take a couple of years (I would have to get used to walking on a prosthetic, so mobility seems extremely complex).

  3. Changeover of an employee -- this gets complicated. It could easily take months, with some trial and error involved. Given sufficient notice, and the ability to hire the replacement while the old employee helps with handover, onboarding, and training, it could take only weeks after finding a good candidate -- and my margin of error on the new hire becomes a lot wider. I still need a smart, capable employee, but if the previous person and I can spend solid time getting him or her up to snuff, this could be very fast and easy.
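To make case 1b concrete, here is a minimal sketch of how those phase sub-estimates tally into an overall turnover window. It's just my own back-of-the-envelope illustration, and the days-per-week and days-per-month conversions are rough assumptions:

```python
# Tally of the case-1b sub-estimates as (low, high) day counts per phase.
# The week/month conversions are rough assumptions.

DAYS_PER_WEEK = 7
DAYS_PER_MONTH = 30  # approximate calendar month

phases = {
    "define and staff the problem": (2, 5),                          # 2-5 days
    "solve the problem": (2 * DAYS_PER_WEEK, 5 * DAYS_PER_WEEK),     # 2-5 weeks
    "debug the solution": (2 * DAYS_PER_MONTH, 5 * DAYS_PER_MONTH),  # 2-5 months
}

# Best case: every phase hits its low estimate; worst case: every high.
low = sum(lo for lo, _ in phases.values())
high = sum(hi for _, hi in phases.values())
print(f"Total turnover: {low}-{high} days "
      f"(~{low / DAYS_PER_MONTH:.1f}-{high / DAYS_PER_MONTH:.1f} months)")
# Prints: Total turnover: 76-190 days (~2.5-6.3 months)
```

Even in the best case, debugging accounts for most of the total, so the overall turnover time is set almost entirely by how long the bugs take to surface.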