Comments

Comment by Going Durden (going-durden) on Immortality or death by AGI · 2023-09-22T08:43:03.211Z · LW · GW
  • Without AGI, people keep dying at historical rates (following US actuarial tables)

I'm not entirely convinced this is the case. There are several possible pathways toward life extension including, but not limited to, the use of CRISPR, stem cells, and most importantly finding a way to curb free radicals, which seem to be the main culprits in just about every aging process. It is possible that we will "bridge" toward radical life extension long before the arrival of AGI.

Comment by Going Durden (going-durden) on Some reasons why I frequently prefer communicating via text · 2023-09-22T08:29:53.615Z · LW · GW

possibly, but is that not basically a No True Rationalist trick? I do not see a way for us to truly check that, unless we capture LW rationalists one by one and test them, but even then, what is preventing you from claiming, "eh, maybe this particular person is not a Real Rationalist but a Nerdy Hollywood Rationalist, but the others are the real deal," ad nauseam?

I definitely agree that people who consider themselves Rationalists believe themselves to be Actual Rationalists, not Hollywood Rationalists. This of course leads us to the much-analyzed question of "why aren't Rationalists winning?" The answers I see are that either Rationality does not lead to Winning, or the Rationalists aren't Actual Rationalists (yet, or at all, or at least not sufficiently).

A major case in point is that Rationalists have mostly failed to convince the world about the threat posed by unrestricted AI. This means that either Rationalists are wrong about the AI threat, or bad at convincing. The second option is more likely, I think, and I wager the reason Rationalists have a hard time convincing the general public is not that the logic of the argument is faulty, but that the delivery rests on clunky rhetoric and shows no attempt at well-engineered charisma.

Comment by Going Durden (going-durden) on Luck based medicine: angry eldritch sugar gods edition · 2023-09-20T10:07:43.205Z · LW · GW

The Stevia-drink issue is likely psychological in nature, not blood-sugar related. To test it, you would have to be tricked by a third party into drinking a stevia soda unknowingly, and conversely, be tricked into drinking sugary soda while thinking it is stevia-based; then compare the results.

In my own diet journey I noticed a similar trend: knowingly eating or drinking substitutes for things I like makes my subconscious throw a tantrum and demand the real thing anyway. I think it is more about self-resentment over being tricked than about the actual taste or content.

Just giving up the thing completely, both the real thing and the substitute, hurts more at first, but makes it easier to form a habit (for example, replacing soda not with stevia soda but with plain water). Some minds find the purposeful "asceticism" of a diet easier than the "pretend abundance" of replacement products.

Comment by Going Durden (going-durden) on Some reasons why I frequently prefer communicating via text · 2023-09-20T09:46:16.580Z · LW · GW

some counter-arguments, in no particular order of importance:

  1. Verbal communication is quite often more succinct, because it is easier to exhaust the vocal medium, and you can see in real time your conversation partners getting bored with your rambling.
  2. Verbal communication carries far more nuance through tone, body language, and social situation, and thus often delivers the message more clearly. I find it most useful when discussing Ethics: everyone is a clinical utilitarian when typing, but far more humanistic when they see the other person's facial reaction to their words.
  3. Rhetoric and charisma do not carry well over text. Most Rationalists consider this beneficial, right up until the point where they need to explain something to, or convince, non-Rationalists and completely lack the tools to do so. Avoiding verbal rhetoric and not training your in-person charisma is the surefire way to become very unconvincing to the general audience: case in point, every attempt to explain AI Risk to "muggles" by somewhat introverted and dry-talking Rationalists.
  4. Related to point 3: conversational charisma is the main tool used by human males to woo women. By not practicing conversational charisma, Rationalists ensure they will breed themselves out of existence.
  5. Most child-rearing and education is oral communication. Without practicing it, the Rationalist will not make a good parent or teacher, and thus, from a civilizational perspective, has squandered his rationality.
  6. Rubberducking: saying things out loud quite often leads to epiphanies, especially negative ones ("wow, my cherished idea sounds really dumb when I say it out loud"). Writing down, and then reading, your own ideas often leads to an emotional feedback loop in which you reinforce your own conviction rather than nit-picking your own idea. This leads to...
  7. Oral communication avoids the risk of Rabbit-Holes. When writing, uninterrupted, it is easy to accidentally pick a logical mistake as the crux of your whole argument, and waste hours exploring it. In conversation, your partner/opponent can nip that in the bud.
  8. Op-Sec. Oral conversation is far less likely to get you in trouble for the things you say, unless you are being recorded. Meanwhile, a text-based conversation, especially on a social platform, is a Sword of Damocles always hanging over your head. Say the wrong thing, and at worst a dozen people will consider you an ass. Write and post the wrong thing, and you might, decades from now, lose your job, your social standing, or even your life. An innocent comment today might make people cancel you in 2040, or a vengeful Basilisk mulch you in 2045.

Comment by Going Durden (going-durden) on Eugenics Performed By A Blind, Idiot God · 2023-09-20T09:21:18.125Z · LW · GW

There is also the fact that we are already, effectively, controlling our own genetic pressures through culture and civilisation. Our culture largely influences our partner choice, and thus, breeding. Our medical sciences, agriculture, and urbanization take the pressure off survival. So the eugenic/dysgenic/paragenic process is in effect anyway, just... stupidly.

Some simple examples:
- agriculture pushes us to be lactose tolerant and carbohydrate dependent
- art and media dictate our sexual choices and mate choice
- education creates pressure for intelligence, but a very specific kind of it
- in the long run, contraception methods might pressure further evolution of our reproductive systems (e.g. sooner or later, women with extremely unlikely mutations that allow them to "beat" the contraceptive pill will outbreed those who do not share such a mutation)

I'm particularly interested in how our sexual culture effectively works as a secondary "blind goddess of eugenics". For likely the first time since the Neolithic (or possibly ever), we have reached an age in which women are free to choose their male partners based on physical attraction and mental kinship, not social pressure and the need for survival. Assuming this trend continues, and we do not relapse into social conservatism, I expect a rather sudden (by evolutionary standards) shift in male selection, and thus sexual dimorphism.

On top of that, with the rise of affordable in vitro fertilization, we are effectively practicing conscious Eugenics, one specifically geared toward the needs of women and couples rather than society at large. We are entering an age in which the human male is not strictly necessary for breeding, or for his offspring's survival, and thus, with the exception of the rare super-specimens who are sperm donors, men no longer fall under any evolutionary pressure, and do not really need to exist.

The decades between the moment when in-vitro becomes the norm, and the moment when artificial wombs become the norm, will be very interesting indeed.

Comment by Going Durden (going-durden) on Should rationalists (be seen to) win? · 2023-08-30T12:13:24.584Z · LW · GW

Such communities are then easily pulverized by communities that value strong groupthink and appeal to authority, and thus are more easily whipped into a frenzy.

Comment by Going Durden (going-durden) on video games > IQ tests · 2023-08-30T09:31:21.097Z · LW · GW

I mostly agree with you, though I have noticed that if a job is mostly made of constantly changing tasks that are new and dissimilar to previous tasks, there is some kind of efficiency problem up the pipeline. It's the old Janitor Problem in a different guise: a janitor at a building needs to perform a thousand small, dissimilar tasks, inefficiently and often in an impractical order, because the building itself was inefficiently designed. That is why we still haven't found a way to automate a janitor; for that we would need to redesign the very concept of a "building", and for that we would need to optimize how we build infrastructure, and for that we would have to redesign our cities from scratch... etc., until you find out we would need to build an entire new civilization from the ground up just to replace one janitor with a robot. It still hints at a gross inefficiency in the system, just one not easily fixed.

Comment by Going Durden (going-durden) on 6 non-obvious mental health issues specific to AI safety · 2023-08-23T07:50:40.275Z · LW · GW

There are also some mental health issues among people who know about AI safety concerns, but are not researchers themselves and are not even remotely capable of helping or contributing in a meaningful way.

I, for one, learned about the severity of the AI threat only after my second child was born. Given the rather gloomy predictions for the future, I'm concerned for their safety, but there does not seem to be anything I can do to ensure they would be okay once the Singularity hits. It feels like I brought my kids into life just in time for the apocalypse to hit them when they are, at best, still young adults, and irrationally, I cannot stop thinking that I'm thus responsible for their future suffering.

Comment by Going Durden (going-durden) on Walk while you talk: don't balk at "no chalk" · 2023-08-23T07:35:21.236Z · LW · GW

I have noticed I also recall conversations, podcasts, etc. better if I was doing some kind of manual task at the same time (like woodcarving, or just doing the dishes). My interpretation is that focusing on a conversation while immobile is under-stimulating, and thus causes the mind to wander. If one is walking, or doing something physical, it's enough physical stimulation to let the mind focus on the conversation in a "railroaded" fashion, without self-distraction.

Even deeper: it feels great to match your walking/activity pace to the emotional message of the conversation. I suppose it triggers the same reaction as ASMR, perhaps because it lets us "act out" our emotional reaction to the words without inappropriate gesticulation.

Further weak evidence that walking helps with conversational cognition:

- plenty of people, without any cultural connection between them, pick up the habit of pacing around when on the phone. 

- it was a well-known technique among ancient Greek philosophers and scholars to take their students on a walk, or even a longer trip, while discussing abstract subjects. Apparently it worked very well and was done this way for centuries.

- humans evolved to be semi-nomadic persistence hunters. Walking around all day is the natural state we evolved for; sitting down for hours is not.

Comment by Going Durden (going-durden) on video games > IQ tests · 2023-08-16T08:30:21.821Z · LW · GW

OTOH, I have a hunch that the kinds of jobs that select against the "speed-run gamer" mentality are more likely to be inefficient, or even outright bullshit jobs. In essence, speed-running is optimization, and jobs that cannot handle an optimizer are likely to have an error in the process, an error in the goal-choice, or possibly both.

In the admittedly small sample of workplaces I witnessed that could not handle optimization, it was because the "work" was a cover for some nefarious shenanigans, built for inefficiency for political reasons, or created for status games instead of useful work/profit.

Comment by Going Durden (going-durden) on Ten Thousand Years of Solitude · 2023-08-16T08:07:49.047Z · LW · GW

Aside from the obvious reasons already mentioned, I wonder if the reason for the regress was not partially related to compound inbreeding. In most cases when technological regress happens, it tends to coincide with a genetic bottleneck as well, which I have a hunch would make the problems worse.

Comment by Going Durden (going-durden) on Cryonics and Regret · 2023-08-16T07:59:47.307Z · LW · GW

It's in the ballpark of 50k. I support a family of 4 on roughly 10k a year. I can save about 1k-2k a year if we live on a very, very tight budget. It would thus take me a century to pay for cryonics just for my immediate family, if the prices do not fall quickly enough.
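For concreteness, the back-of-the-envelope arithmetic above can be sketched out; all figures are the comment's own ballpark estimates, not verified cryonics prices:

```python
# Ballpark figures quoted in the comment above -- assumptions, not real prices.
cost_per_person = 50_000   # rough cryonics cost per person, USD
family_size = 4            # immediate family
savings_per_year = 2_000   # optimistic end of the stated 1k-2k range

total_cost = cost_per_person * family_size
years_needed = total_cost / savings_per_year
print(total_cost, years_needed)  # 200000 100.0 -> roughly a century
```

At the pessimistic 1k/year savings rate, the figure doubles to two centuries, so "a century" is already the optimistic case.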

Comment by Going Durden (going-durden) on "Justice, Cherryl." · 2023-07-25T10:29:12.622Z · LW · GW

In Rand's defense, she does define the terms "altruism" and "selfishness" in her works, at length, from every possible angle, ad nauseam. It's impossible to read more than one page of her work and still confuse her definitions with the standard ones.


The confusion usually arises through a game of telephone, when people opposed to Objectivism comment on things written by fans of Rand, without ever actually reading the source material.

Comment by Going Durden (going-durden) on "Justice, Cherryl." · 2023-07-25T10:25:10.100Z · LW · GW

Every human being is selfish, but most are also altruistic some of the time


What, in your estimation, would be a difference between actual altruism, and "altruism" done for the sake of selfish emotional fuzzies?

Let's say I pass a beggar on the street. If I give him a dollar because he needs it, it's altruism. If I give him a dollar because I want to feel like I'm a Good, Charitable Guy, and genuinely enjoy his thanks, then it's selfishness.

About the only true altruism I can think of that is not essentially a form of egoism is when you absolutely HATE the fact that you act charitably, and get zero pleasure from it, not even masochistically. If you so much as get a single second of warm fuzz in your heart from your charitable act, that's just roundabout selfishness. If you pay the beggar $1 and then feel emotionally better, he is essentially your low-budget therapist, and you have just performed a completely selfish act of capitalist exchange.


Comment by Going Durden (going-durden) on Cryonics and Regret · 2023-07-25T10:02:17.755Z · LW · GW

I truly hope the cost of cryo falls rapidly in the next few years. A back-of-the-napkin calculation I did shows that if I wanted to pay up front for an option to cryopreserve my children (should they ever need it), I would have to save money for over 20 years, skimping on every life luxury for them and myself. It would be a bizarre life in which we would live like ascetic monks who spend most of their lives preparing to die and achieve the Afterlife. Uncannily like religion.

If, aside from paying for cryo for my kids, I also wanted to pay for my own, my SO's, my parents', my brother's, etc., I would need to be effectively immortal just to put in enough work-hours.

Cryo might end up being the absolute pinnacle of elitist technology, because if you are not rich and Western enough, you are unlikely to ever afford it, and thus destined not only to die, but to watch your loved ones die as well, while average Middle Class people from the US or Western EU just chuck their sick loved ones into a freezer with a near certainty of their eventual survival and health.


The religions had it all wrong. In order to achieve Immortality in the Afterlife, you do not need to be good, or without sin, or pious; you just need to be able to save around 30-80k. If you can't, well, sucks to be you. Should have thought of it before you decided to be born poor.

Comment by Going Durden (going-durden) on Rationality !== Winning · 2023-07-25T09:11:16.224Z · LW · GW

One thing I don't see explored enough, and which could possibly bridge the gap between Rationality and Winning, is Rationality for Dummies.

The Rationalist community is oversaturated with academic nerds, borderline geniuses, actual geniuses, and STEM people whose intellectual level and knowledge base are borderline transhuman.

In order for Rationality and Winning to be reconciled with minimum loss, we need bare-bones, simplified, kindergarten-level Rationality lessons based on the simplest, most relatable real-life examples. We need Rationality for Dummies. We need Explain Methods Like I'm Five, and it would have to actually work for actual 5-year-olds.

True, objective Rationality Methods should be applicable whether you are an AI researcher with a PhD or someone too young/stupid to tie their own shoes. Sufficiently advanced knowledge and IQ can just brute-force winning solutions despite irrationality. It would be more enlightening if we equipped a child/village idiot with simple Methods and judged their successes on this metric alone: lacking intellectual capacity and theoretical knowledge, they would need to achieve winning by step-by-step application of the Methods, rather than by jumps of intuition resulting from unconscious knowledge and cranial processing power.

Only once we have solid Methods of Rationality that we can teach to kids from toddler age, and expand on until they are Rational Adults, can we say for certain which Rationalist ideas lead to Winning and which do not.

Comment by Going Durden (going-durden) on Micro Habits that Improve One’s Day · 2023-07-03T10:26:25.845Z · LW · GW

One of the main ways I managed to instill good habits in myself is to both open optimal paths to good habits and close optimal paths to sub-optimal habits. The trick is to make a good habit easier than it is annoying, and a bad habit more annoying than it is preferable.

Examples:

Hydration: I simply place a 2l water bottle by the apartment door every evening. It becomes impossible for me to leave the house without picking it up, and once it is in my hand, I'm much more likely to drink from it and take it with me than to forget it.

Exercise: I bought dumbbells to work out with, but consciously made no place to put them. I just place them on my gaming chair, so it becomes impossible to use the PC without lifting the dumbbells. And the moment they are literally in my hands, it is easier to just pump a few curls than not.

Exercise/commute: I'm trying to unlearn driving everywhere, and bike whenever I can. I just place my car keys in my bike's frame pouch. This way I cannot leave the house without touching my bike, and once I do, it's easier to just hop on it and ride away.

Diet: I have always struggled with weight, and the one "simple trick" that actually worked for me was brushing my teeth ASAP after dinner. Since my teeth are already brushed, and it would be annoying to do so again, I'm much less likely to snack after dinner. If the urge to snack is really strong, I just use some mouthwash, which not only makes me even more disinclined to soil my super-clean teeth, but no snacks taste good when my mouth is super minty.

Waking early: the path to the sub-optimal habit is to hit snooze on the alarm and go back to sleep. Breaking the habit was as easy as placing the alarm clock in the bathroom, so I have to walk across the entire house to turn it off, and once I do, I'm already where I need to be to brush my teeth and shave, so I might as well do so.


The reason these work is that all those habits are relatively weak, and a small tweak to how annoying they are makes all the difference. It's basically weaponizing my own laziness/procrastination against itself. The goal is to make myself spend extra energy walking around looking for the things needed for my bad habits, while the things needed for the good habits are always in my path.

Comment by Going Durden (going-durden) on 60+ Possible Futures · 2023-06-27T07:40:49.937Z · LW · GW

My take on some of the items on this list:

Lack of Intelligence: Very likely
Slow take-off AI: Very likely
Self-Supervised Learning AI: Likely
Bounded Intelligence AI: Likely
Far far away AI: Likely
Personal Assistant AI: Close to 100% certain
Oracle AI: Likely
Sandboxed Virtual World AI: Likely
The Age of Em: Borderline certain
Multipolar Cohabition: Borderline certain
Neuralink AI: Borderline certain
Human Simulation AI: Likely
Virtual zoo-keeper AI: Likely
Coherent Extrapolated Volition AI: Likely
Partly aligned AI: Very likely
Transparent Corrigible AI: Borderline certain


In total, I think the most probable scenario is a very, very slow take-off, not a Singularity, because AGI would be hampered by Lack of Intelligence, slowed down by countless corrections, sandboxing, and the ubiquity of LAI. In effect, by the time we have something approaching true AGI, we will long since have become a culture of cyborgs and LAIs, and the arrival of AGI will be less a Singularity than the fuzzy pinnacle of a long, hard, bumpy, and mostly uneventful process.

In fact, I would claim that we will never be at a point where we can agree: "yep, AGI is finally achieved." I rather envision us tinkering with AI, making it painstakingly more powerful and efficient in tiny incremental steps, until we shrug and say, "eh, this Artificial Intelligence is General enough, I guess."


In my view, the true danger does not come from achieving AGI and it turning on us, but rather from achieving stupid, buggy, yet powerful LAI, giving it too much access, and having it do something that triggers a global catastrophe by accident, not out of conscious malice.

It's less "Superhuman Intelligence got access to the nuclear codes and decided to wipe us out" and more "Dumb-as-a-brick LAI got access to the nuclear codes and wiped us out due to a simple coding error".

Comment by Going Durden (going-durden) on Confused Attractiveness · 2023-06-27T07:10:07.303Z · LW · GW

One problem I see with your insect alien example, which also, to a much greater degree, influences human attractiveness, is that there are not just four, or five, or a dozen physical attractiveness factors, but hundreds of them. And each of these factors influences the others in different ways, for example:

  • height on a man is considered attractive
  • low body fat on a man is considered attractive, but;
  • a combination of too much height and too little body fat would be unattractive.

My take is that there are hundreds, even thousands, of traits that fall under "Flawlessness", but they play very weirdly against each other, and thus Appeal is born: a personal subconscious opinion on which sets of traits one likes most.

What is also missing from your analysis is Beauty-Appeal vs. Sex-Appeal. Some traits trigger our aesthetic appreciation, and some trigger our raw sexual appetite, and not only are these not the same traits, but they are sometimes opposite ones.

I would define Sex-Appeal as a set of traits, physical and behavioral, that make the person seem:

  • relatively easy to seduce (for me), also known as DTF (down to fuck)
  • suggesting they would be good at sex
  • suggesting their body would feel nice to touch
  • vaguely related to strong Secondary Sexual Characteristics

Meanwhile, Beauty-Appeal is a set of purely aesthetic Flawlessness traits that do not correspond to the above points at all, but show symmetry, golden ratio, an aesthetically striking color palette, etc. They make a person a perfect model, someone you would love to take pictures of, paint, or draw, rather than get raunchy with.

I would take it even further: many Beauty-Appeal traits take away from Sex-Appeal, because some of them are signifiers of innocence, youth, or a vaguely stand-offish perfection that make the person seem like they would not be DTF. We subconsciously disengage from thoughts of having sex with such a person, regardless of whether these traits truly signify anything. Some examples:

- melodic, high female voice: beauty
- raspy, low-pitched female voice: sexy
- flawless skin: beauty
- tattoos and "cool" scars: sexy
- hairless male chest: beauty
- hirsute male chest: sexy
- perfectly sized medium breasts: beauty
- oversized breasts: sexy


Comment by Going Durden (going-durden) on "Natural is better" is a valuable heuristic · 2023-06-21T14:02:02.401Z · LW · GW

What I'm getting at is that while the evidence for the oldest agriculture is from around 12k-10k years ago, this is not the same as saying that your particular ancestors come from a line that used agriculture for a solid 10k years straight (unless you are from very specific Anatolian or Iraqi genetic lines).

It could easily be the case that your ancestors had been eating grain and dairy for 500 generations, or maybe just 10 generations or fewer.

One example of what I'm talking about is lactose tolerance, which allows one to consume dairy. It is a mutation that is only roughly 8k years old, and that's only if you are of Anatolian/Turkish ancestry.

Another would be protein madness, which rarely happens among Sub-Polar peoples but affects Europeans who moved north.

Similarly, our genetic predisposition towards certain reactions to gluten, high-protein diet, high fructose diet, even alcohol vary wildly. 

In most cases, when we think of the "modern" diet and lifestyle, we are basically thinking of the industrialized, grain-and-dairy Anglo-Saxon diet and a life of small caloric surplus over a relatively modest caloric expenditure. Which affects you differently if you indeed are of Anglo-Saxon ancestry, and your ancestors had been eating cheese and bread for at least 6k years while slowly reducing the amount of labor needed to produce it.


It's going to hit you differently if your ancestors were Sub-Polar peoples who subsisted on a high-fat/zero-carb diet, or came from a tropical jungle where they subsisted on high-sugar fruit, low-fat meat, and minimal labor to procure it.

Comment by Going Durden (going-durden) on "Natural is better" is a valuable heuristic · 2023-06-21T12:36:06.486Z · LW · GW

How similar is your life to that of a homo sapiens from 12,000 years ago? If you made it more similar, would that help you?

Why pick that arbitrary point in our evolution? My ancestors 12k years ago could have been subsistence farmers who toiled all day but ate a lot of calories. They could have been cold-climate hunter-gatherers who fasted intermittently between giant feasts and burned most of those calories down to zero trying to secure the next big kill. They could have been tropical-climate hunter-gatherers who did light hunting and gathering 2-3 hours a day, ate small meals, and played lazily all day.

And this only takes into account the ancestors from exactly 12k years ago. What about ancestors from 6k years ago? What about 200k years ago?

To make matters more complex, different ancestries would call for different lifestyles and diets. Our natural metabolism, lactose tolerance, muscularity, fat %, and countless other factors vary wildly between ancestries. A lifestyle/diet fit for a descendant of the Inuit would not fit a descendant of the Xhosa, and vice versa.

Human evolution is an ongoing process that takes different populations in wildly different directions, so it is not obvious what the "natural environment" is for each human, unless they are literally living a stone-age life right now, in absolute genetic and technological isolation.

Comment by Going Durden (going-durden) on Guide to rationalist interior decorating · 2023-06-20T11:58:44.090Z · LW · GW

One simple trick I applied to my apartment lately is to break with the tradition of "proper" placement of various objects, furniture, and doodads, and instead focus on pure functionality and the natural paths that come from human laziness.

Examples:

  • beverage cooler right next to the couch, NOT in the kitchen. After all, I drink beer on the couch, not in front of the sink like a madman. Same goes for the bottle opener, corkscrew, etc.
  • TV set high up on the wall, almost at ceiling level. Since I watch TV/Netflix reclined on the couch, it makes no sense to place it at "eye level", since my eyes point upward, not forward.
  • wall clock in the bathroom, right over the mirror. Most people who bother having a wall clock keep it in the living room, but that makes little sense. The most likely situation in which you need to look at the clock is when you are preparing to leave the house: while getting dressed, brushing teeth, etc.
  • the closet with "in house" sweats, pajamas, etc. is in the bathroom, right next to the bathtub/shower, so I can dress myself in fresh clothes immediately after washing, and not streak naked around the house looking for pajama bottoms.
  • "poop library". A trick well known to boomers, but largely forgotten, is to have a pile of dog-eared, cheap, redundant books on a shelf right next to the toilet, for when you need something to read while waiting for Number 2 and do not want to spread icky on your phone.
  • storage poufs. Basically a storage box with a pillow on top that you can use both as a chair and to keep stuff in. If you buy poufs the same height as the seat of your couch, they also work as a perfect extension to stretch your legs. Keeps all the clutter I need in my "couch space" at hand.
  • door shelf. In my case it's a flat rectangular bowl that I bolted to the door, where I put stuff that I absolutely need to take with me when I leave the house. The shelf must be ON the door, not next to it, at eye level, so that it is impossible to open the door without seeing the contents of the door shelf, and you kinda have to take them with you, or the act of opening the door would spill the contents of the shelf all over the floor.

Comment by Going Durden (going-durden) on I still think it's very unlikely we're observing alien aircraft · 2023-06-19T08:51:01.548Z · LW · GW

It is quite possible, though, that over time there are fewer and fewer BFs. They might be going extinct, even without much human interaction. As for finding bones: if the population is low and their territory so big, it might take centuries.

Comment by Going Durden (going-durden) on I still think it's very unlikely we're observing alien aircraft · 2023-06-16T10:20:19.633Z · LW · GW

I have also noticed that there is an inverse cultural relationship between belief in magic, witchcraft, spirits/fair folk, etc. and belief in UFOs. Which makes me think aliens simply fill the post-Enlightenment gap in the legendarium for cultures that want to pretend they are "too reasonable" to believe in magic, but are open to belief in "sci-fi" myths; i.e.: Fair Folk kidnapping folk - nah, Aliens kidnapping folk - yah.

Comment by Going Durden (going-durden) on I still think it's very unlikely we're observing alien aircraft · 2023-06-16T10:12:55.242Z · LW · GW

As for Bigfoot: while I don't believe it exists, I think it's the wrong way to think of it as avoiding cameras. The more reasonable explanation is that cameras avoid the places where it could possibly live. Bigfoot, Sasquatch, Yeti, and similar Apemen are almost always reported to live in remote wilderness, specifically the north of the USA, Canada, Russia, China, and of course the Himalayas. It seems like we should be able to spot them, until you realize that the northern wilderness belt that stretches from Alaska to Greenland, and then around Eurasia and back to Alaska, is astonishingly big and almost completely empty of humans. We are talking about a strip of wilderness with about the same surface area as the Moon, while the possible population of Bigfeet would likely be smaller than the population of chimps in Africa. If every researcher interested in finding Bigfoot went to explore the Big North with all the state-of-the-art equipment they could carry, and they spread out evenly to cover maximum area, they would not only not find Bigfoot, but not find each other, due to the enormous distances across impassable woodland and mountains.

Comment by Going Durden (going-durden) on I still think it's very unlikely we're observing alien aircraft · 2023-06-16T09:58:23.247Z · LW · GW

I would even argue that Bigfoot being more bigfooty - a primitive yet sapient and intelligent hominid, perhaps some late descendant of Gigantopithecus - is more plausible than it being, say, a sloth, because it seems to make honest attempts to avoid humans. If it were a mere sloth, or an ape of the same intellectual capacity as a chimp, it would be found far more easily.

While the existence of Bigfoot is extremely unlikely, if it were real, I would rather assume they are a tribal species of essentially very hairy humans who avoid us the same way the Sentinelese do.

Comment by Going Durden (going-durden) on UFO Betting: Put Up or Shut Up · 2023-06-16T09:47:23.923Z · LW · GW

I would also take issue with the "mundane" part. What does that even mean? Any explanation good enough to cover all UFO cases, with their myriad physics-defying feats, is in itself proof of supertechnology, which should also be under the bet.


For example, an explanation that the supposed UFOs are really experimental military aircraft would simply mean that the military possesses technology that is effectively "magic" compared to the civilian aircraft technology. If you witness a flying object that can push Mach 10 effortlessly and takes instant turns without any inertia, does it matter if this is an alien craft or human military craft? It still should belong on the list.

Comment by Going Durden (going-durden) on UFO Betting: Put Up or Shut Up · 2023-06-16T09:39:10.870Z · LW · GW

Leftovers of an ancient civilization 

Archaeologist here: you'd want to really, really narrow down what you mean here, otherwise we will clean your pockets pretty easily. Since about 2016, new discoveries of ancient civilizations predating the most reasonable estimates have cropped up like mushrooms.

My estimate is that we will have several proofs pushing the origins of civilization at least 10k years backwards, if not more, in the very near future, likely along the vectors of:
- Göbekli Tepe and other Turkish/Anatolian ruins being significantly older than we thought.
- The Sphinx and some of the Egyptian stuff being significantly older than we thought.
- ruins in Indonesia that have a good chance to be proven older than all of the above.
- the Pacific Connection (Australian Aboriginal people and some South American tribes being related) being confirmed, thus pushing the colonization of America at least 12k years further back, via boat no less.
- evidence that copper, iron and tin were smelted significantly earlier than we assumed.

In other words: the current established estimate is that civilization as we know it is at most around 12k years old, and did not really kick off for real until 6k BCE. But we keep finding evidence that pushes that back to at least 25k BCE. We also keep finding evidence that both Neanderthals and Denisovans split much earlier than we thought, were much more numerous, and survived longer than assumed, so it is completely possible that there was a proto-civilisation 20k years before Sumer even existed, and that, conceivably, Neanderthal humans could have witnessed it (or possibly participated?)

Comment by Going Durden (going-durden) on What's the consensus on porn? · 2023-06-12T09:24:23.527Z · LW · GW

We know that involuntary sexual celibacy is psychologically harmful and socially disruptive. If porn can dampen the effects of involuntary celibacy and sexual frustration (which include, but are not limited to: rape, sexual harassment, and social radicalization, and which correlate with acts of terrorism, public shootings, etc.), then it is almost certainly a net positive.

Comment by Going Durden (going-durden) on What's the consensus on porn? · 2023-06-12T09:18:57.336Z · LW · GW

One strong argument in favor of porn is that almost nobody alive gets as much sex as they actually want; the vast majority gets less than they want, a minority gets too much, and without some kind of extreme social engineering this cannot be solved.

Porn is the closest thing to a "bandaid solution" to that problem. Sexless or severely undersexed people can achieve an illusion of a sex life with porn. Yes, porn is addictive and can be psychologically harmful, but involuntary celibacy is definitely severely harmful, and we cannot solve it any other way.

Comment by Going Durden (going-durden) on What's the consensus on porn? · 2023-06-12T09:08:33.619Z · LW · GW

The underlying issue here is that the supply of sex, the quality of sex, the supply of quality partners, and the logistics of all the above cannot meet the popular demand. It would require the number of highly libidinous, attractive partners to be equal to or exceeding the number of adults who desire sex. Until we somehow achieve Sexual Post-Scarcity (how? Sex-bots? VR sex? A massively orgiastic global swinger culture?), porn is unavoidable.

Good sex with an attractive partner is an extremely scarce resource. In fact, any sex, even crappy sex, is scarce, and far, far below popular demand. Porn is a necessary plug. It provides a better form of sexual release than pornless masturbation.

So in that regard, it is obvious that porn is more beneficial than harmful, since the alternative to porn for many is effectively celibacy, which has plenty of harmful psychological and social effects, including violence (sexual and otherwise).

 

Comment by Going Durden (going-durden) on AI Will Not Want to Self-Improve · 2023-05-17T12:34:37.997Z · LW · GW

I'm confused by this post. It might be that I lack the necessary knowledge or reading comprehension, but the post seems to dance around actual SELF-improvement (the AI improving itself, Ship of Theseus style), and refocuses on improvement by iteration (an AI creating another AI).


Consider a human example. In the last few years, I learned Rationalist and Mnemonic techniques to self-improve my thinking. I also fathered a child, raised it, and taught it basic rationalist and mnemonic tricks, making it an independent and only vaguely aligned agent potentially more powerful than I am. 

The post seems to focus on the latter option.

Comment by Going Durden (going-durden) on What does it take to ban a thing? · 2023-05-11T07:37:09.984Z · LW · GW

is if it turns out that advanced narrow-AIs manage to generate more utility than humans know what to do with initially.

 

I find it not just likely but borderline certain. Ubiquitous, explicitly below-human narrow AI has a tremendous potential that we act blind to while focusing on superhuman AI. Creating superhuman, self-improving AGI, while extremely dangerous, is also an extremely hard problem (in the same realm as dry nanotech or FTL travel). Meanwhile, creating brick-dumb but ubiquitous narrow AI and then mass producing it to saturation is easy. It could be done today; it's just a matter of market forces and logistics.

It might very well be the case that once the number of narrow-AI systems, devices and drones passes a certain threshold (say, it becomes as ubiquitous, cheap and accessible as cars, though not yet as much as smartphones) we would enter a weaker form of post-scarcity and have no need to create AI gods.

Comment by Going Durden (going-durden) on Luck based medicine: my resentful story of becoming a medical miracle · 2023-05-11T07:07:16.248Z · LW · GW

I have a somewhat similar story. I have been struggling with ADHD all my life, and only recently started using ADHD medication. Unfortunately, it gave me stomach issues and tremendous reflux, which was only tolerable if I took it in small doses... which in turn barely helped with my ADHD.

After testing pretty much every anti-ADHD drug in combination with every anti-reflux drug, I gave up, and tried my aunt's suggestion of ashwagandha. I was beyond skeptical, and only gave it a try to please a concerned relative. I was mentally prepared to anti-placebo it, determined to prove it would not work (I even pre-planned my smug and condescending speech about how I did my best to test it and how, obviously, it did nothing, being just another woo-woo herbalist nonsense with no scientific proof behind it).

It goddamn worked. By itself, ashwagandha did precisely nothing. By itself, the ADHD medicine did something, but at the cost of me belching acid like an overfed xenomorph. Combined, the result was far, far greater mental focus, and no digestion issues at all. Absurdly, combining ashwagandha with a smaller dose of amphetamine salts gave better and stabler mental results than just doubling the amph intake.

AFAIK, there are no studies that conclusively prove ashwagandha really works. Those that do suggest it as a sleep aid, of all things. And yet. I talked it over with my psychiatrist, and as far as she knows (and she is likely THE expert on adult ADHD in my country) ashwagandha should do nothing at all.

 

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-05-08T06:38:54.189Z · LW · GW

It would be fascinating if propensity for limerence was genetically determined, because limerence directly influences our mating/breeding habits. For one, teen pregnancy might very well be a side effect of this.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-05-05T09:47:20.971Z · LW · GW

In that regard, should we assume that the missing component that makes love "romantic" or "limerent" is irrationality?

My instinct is that if someone has a gooey, excessive feeling that the other is Significant, it counts as romantic, but if one has a rational, evidence-based belief that the other is Significant, it would not be considered romantic enough, even if the emotional bond would be much more resilient in the second example.


To use a more concrete example:

1. Bill meets Alice and falls madly in love with her. He does irrational, excessively symbolic and juvenile things to impress her. They break up anyway after a turbulent 3 months. Their Love is Romantic.

2. Frank meets Jane on a professional dating app, and they see with perfect clarity that their values, ideologies, libidos, tastes and lifestyles are perfectly aligned. They marry and spend 57 years together in an easy bliss, until they die. Their relationship would not be qualified as romantic, even though it generated more happiness and a stronger bond.


Therefore, I would suggest that the important components of romance are: irrationality, excessiveness, emotional risk, and playing against bad statistical odds. In other words, drama.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-05-05T09:35:01.508Z · LW · GW

I honestly cannot recall ever feeling limerence, even when I felt love. This led me to research it, and it seems like limerence is highly culture-specific, and is likely more a cultural meme than an emotion inherent to human brains. If I were to guess, I would say limerence is a side effect of the emotional and sexual frustration of young and inexperienced humans dabbling in their first relationships, and since hearing/gossiping/reading about other people's romantic frustrations is exciting, it became a meme.

To support this theory, we see much greater emphasis on limerence in cultures and ages when virginity until marriage was considered sacred, and young people were gender-segregated. In free-love egalitarian cultures, we see remarkably little dramatic limerence, and in fact, we see attempts by the youth to artificially create romantic drama (ex: going out of their way to date dangerous people, or pining over an inaccessible celebrity) to achieve a semblance of limerence.


As we have grown in numbers and social complexity, it has become easy to encounter someone with a completely different desire for, and expectation of, limerence in their life, which I think is the reason romantic relationships became so difficult.


Unsurprisingly, there seems to be very little desire for limerence among LWers and rationalists in general, which explains why a higher-than-average number of us are single, or dating fellow rationalists.

Comment by going-durden on [deleted post] 2023-05-05T09:19:05.102Z

Some reasons why I'm personally not as involved in working to prevent AI Hell:

(in no order of importance).

1. I'm not strongly convinced a hostile Singularity is plausible, at least in the near future, from a technological, logistical, and practical standpoint. Pretty much every AI Hell scenario I have read hinges on the sudden appearance of scientifically implausible technologies, and on instant, perfect logistics that the AI could use.

2. The main issue that could lead to AI Hell is the misalignment of values between AI and humans. However, it is patently obvious that humans are not aligned with each other, with themselves, or with rational logic. Therefore, I do not see a path to align AI with human values unless we Solve Ethics, which is an impossible task unless we completely redesign human brains from scratch.

3. I'm personally not qualified to work on any technological aspects of preventing AI Hell. I am qualified to work on human-end Ethics and branch into alignment from that, and I see it as an impossible task with the kind of humans we get to work with.

4. A combination of points 1 and 2 leads me to believe that humanity is far more likely to abuse early stage AI to wipe itself out, than for AI itself to wipe out humanity of its own volition. To put it differently, crude sub-human level AI can plausibly be used to cause WW3 and a nuclear holocaust without any need for hostile superhuman AI. I think we worry too much about the unlikely but extremely lethal post-Singularity AI, and not enough about highly likely and just sufficiently lethal wargame systems in the hands of actual biological humans, who are not sufficiently concerned with humanity's survival.

5. Roko's Gremlin: anyone who is actively working on limiting or forcibly aligning AI is automatically on the hit-list of any sufficiently advanced hostile AI. I'm not talking about the long-term, high-end scenario of Roko's Basilisk, but rather the near-future, low-end situation in which an Internet-savvy AI can ruin your life for being a potential threat to it. In fact, this scenario does not require an actively hostile AI at all. It is completely plausible that a human being with a vested financial interest in AI advancement could use AI to create a powerful smear campaign against, say, EY, to destroy his credibility, and with him the credibility of the AI Safety movement. Currently accessible AI is excellent at creating plausible-seeming bullshit, which would be perfect for social media warfare against anyone who tries to monkeywrench its progression. Look at Nick Bostrom to see how easily one of us can be sniped down with minimum effort.

Comment by Going Durden (going-durden) on cyberpunk raccoons · 2023-04-28T09:49:26.205Z · LW · GW

What about the inverse idea: rather than putting cybernetics into raccoons, put raccoons into cybernetics?

Something like, say, the Boston Dynamics dog, but with a raccoon encased inside to pilot it. The actual strength, dexterity and speed of a raccoon are unimpressive; the impressive part is their intelligence. You could have raccoons piloting all-purpose robot-suits, though for the simplest menial tasks you could get away with something long-lived, robust and sturdy, like a lobster.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-04-28T08:43:10.337Z · LW · GW

Possibly, but to know that, I would have to be shown a definition of what Romantic Love actually is, aside from "deep friendship + sex". Even the Wikipedia article on Romance/Love lists a whole bunch of contradictory definitions that boil down to one of:


1. Friendship and sex (Emotional Bond+ Physical Bond).

2. Biological mate bonding to create offspring.

3. "...You know, that lyrical, limerical, ephemeral thing that we all experience, so we won't define it..."


My guess is that answer 3 is basically social memetics to cover up and normalize the fact that love is basically 2 by way of 1.

And since asexual people supposedly feel Love as well, this means that Love is essentially an intense desire for Friendship that forms a lasting bond.

 

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-04-28T08:17:48.100Z · LW · GW

For a while now, I have been trying out something that I think would be compatible with your Portable Tell Culture, a thing I would call a Passive Tell or a Passive Frame. Basically, the idea is that my outward presentation and behavior is always well matched with my actual internal beliefs, and I consciously use social stereotypes and stylistic cues to make it obvious. 

Without getting into any specifics, I'm exactly the kind of guy you would think I was after a first glance, and my words, actions, behavior, even fashion, match the social stereotype that I internally resemble the most. I'm exactly what it says on the tin, and a book that you can judge by the cover.

This came as a result of an experiment in radical honesty I started 2 years ago. Trying to limit lying and deceptive self-presentation meant that I had to wear my internal beliefs openly and passively advertise them, which naturally filters the possible social interactions, and the types of people I interact with, down to those I'm compatible with.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-04-28T07:59:59.348Z · LW · GW

Another thing I consider common is that a person who is overly flexible in changing their stance, and overly "fluent" in various social stances, comes off as untrustworthy, suspicious, even dangerous. At the height of the PUA/NLP craze, these kinds of people were called "social robots", and their behavior either made people fall for their charisma easily, or be extremely creeped out.

I think humans subconsciously expect some social stance misunderstandings, pushback from people with different stances, and that it will take at least some struggle to convince someone to match your stance. If the other person immediately shifts to a compatible stance, even one incongruous with their previous behavior, it catches us off-guard.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-04-28T07:51:10.666Z · LW · GW

Looking at your components 1, 2, 3, I noticed that these are the same ones I would use when signaling "this interaction is on a timer", trying to communicate that the other person is kinda wasting my time, and they should be brief with their signals and move on. It is less "I'm busy, please go away" and more "you have 90 seconds of my attention span, say your piece".

Maybe the stance toward opposite-sex people we are not interested in is defined not just by intensity (or lack thereof) but by timespan.

Comment by Going Durden (going-durden) on Romance, misunderstanding, social stances, and the human LLM · 2023-04-28T07:40:32.551Z · LW · GW

After reading your post, I realized that I see the term "romantic" as pretty void of meaning. 
Compare these 3:

1. Friendship (caring, vulnerability, bonding)

2. Friendship+sex (friends with benefits, caring, vulnerability, bonding and intercourse)

3. Romantic relationship (friendship + sex +...?)

I do not see anything obvious happening between options 2 and 3. "Romantic" does not seem to add any specific feelings or behaviors, with the possible exception of the expectation of monogamy, which in itself does not generate any new feelings if kept, only new feelings if NOT kept.

Therefore, I would weakly suggest we try to taboo "romantic" and "love", until we figure out what those terms actually mean, and how they differ from friendship in actual content not in social perception.

Thinking about that made me follow further down the rabbit hole. In society, we see a strong, if fuzzily defined, structure of a "Couple" (romantic partners, the Married, etc). There seems to be an expectation that a Couple should be "Romantically In Love", but there seems to be no strong correlation between official Couple status and romantic behaviors (however defined). There does not seem to even be a strong correlation between Couple status and friendship, or Couple status and sex (plenty of Couples are not really meaningfully friends, and plenty of Couples have no sex life, or a below-basic-needs sex life).

The cultural definition suggests that what makes a Couple a Couple, regardless of the above, is the feeling of Romantic Love, but I don't see any workable definition of Romantic Love that does not simply combine Friendship + Sex, so I feel like we are running in circles.

It might be that these issues come from misunderstandings of social stances, but I feel like part of it is that the Meme of Romantic Love is meta to actual one-on-one human relations, and is instead a top-down group belief, or possibly even a Belief-in-Belief, that most of us conform to rather than actually believe in. It might not be that our expectations fail to match each other, but that they try to match a cultural meme that has no actual physical reality behind it.

Part of the reason I think that is that, historically, the definition of Romantic Love was quite weak. Every few centuries or so it would pop up among the idle classes, and then fade away. Friendships and sexual attractions are pretty well defined, and are either ingrained in our biology completely, or so culturally non-controversial that you could talk about friendship with an Ancient Greek, or a Sentinel Islander, or an Inuit, and their definitions would match yours; but their definition of Romance could very well be alien to you, or nonexistent.

Comment by Going Durden (going-durden) on Stop trying to have "interesting" friends · 2023-04-28T06:40:36.569Z · LW · GW

My hunch is that we naturally segregate into monkeyspheres where certain definitions of interesting basically equal fun for everyone involved, and the Boring people are basically strangers breaking the flow. Moreover, humans are not that different; we tend to be interested in similar things, and tend to be bored by similar things, at least broadly speaking.


What I think the OP is trying to tell us is that we should not over-focus on superficially fascinating snobs who talk a good game but aren't good friends. But I think most people actually know that; we treat our brief interactions with Superficially Interesting People the same way we treat chocolate, wine or weed: it's fun to have a little every now and then, but we're not building our lifestyle around it.

Comment by Going Durden (going-durden) on grey goo is unlikely · 2023-04-28T06:31:28.957Z · LW · GW

we very likely did not, given the span of it, and various national responses.

Comment by Going Durden (going-durden) on Stop trying to have "interesting" friends · 2023-04-20T11:22:11.128Z · LW · GW

It comes dangerously close to conflating knowing a lot, reading a lot, or having thoughtful things to say with moral goodness.

 

Take note, however, that life is generally short, and the stakes of friendship are high. An interesting, knowledgeable personality is not the same as moral goodness, but it strongly correlates with, at the least, being more beneficial than harmful. It also strongly correlates with open-mindedness, and usually with empathy (since part of empathy is being able to emulate the thought processes of another person within your mind, to guess their possible reactions, and that takes raw intelligence).

Or to put it differently: Interesting Friends are more likely to be Good Friends than Bad Friends. Non-Interesting Friends are slightly more likely to be Bad Friends, or you are more likely to be a Bad Friend to them out of boredom.


So, given that we are unlikely to be able to acquire dozens upon dozens of friends to test, or invest equally in everybody around us, it makes more sense to invest in Interesting Friends, then test them for things like conscientiousness, ethics and open-mindedness.


(Note: we should also consider neurodiversity issues. An Interesting Friend is significantly more valuable to a person with ADHD. A perfectly morally upright and open-minded Friend who is nevertheless Boring would make an ADHD person claw their own brain out in frustration. To put it differently, people's tolerance of Non-Interesting/Boring people ranges from "eh, he's alright" to "hanging out with them is a Cruel and Unusual Punishment".)

Comment by Going Durden (going-durden) on The basic reasons I expect AGI ruin · 2023-04-20T11:06:18.552Z · LW · GW

I notice I am confused by two assumptions about STEM-capable AGI and its ascent:


Assumption 1: The difficulty of self-improvement of an intelligent system is either linear, or if not, it grows less steeply over time than the system's capabilities. (Counter-scenario: an AI system achieves human-level intelligence, then soon after an intelligence 200% of an average human's. Once it reaches, say, 248% of human intelligence, it hits an unforeseen roadblock, because achieving 249% of human intelligence in any way is a Really Hard Problem, orders of magnitude beyond passing the 248% mark.)

Assumption 2: The AI's capability to self-improve exceeds its own complexity at all times. This is kind of a special case of Assumption 1. (Counter-scenario: complexity is, either always or at some point, greater than the capability, and it becomes an inescapable catch-22.)

I guess that the hidden Assumption 0 for both is "every STEM problem is solvable in a realistic timeline, if you just throw enough intelligence at it." To my STEM-ignorant mind, it seems like some problems are either effectively unsolvable (ie: turning the entire universe into computronium and crunching until the heat death of the universe won't crack it), or not solvable in the human-meaningful future (turning Jupiter into computronium and crunching for 13 million years is required), or, finally, borderline unsolvable due to a catch-22 (inventing computronium is so complex you need a bucket of computronium to crunch it).


Can you lead me to understanding why I'm wrong?

Comment by Going Durden (going-durden) on Concentration of Force · 2023-04-20T08:07:07.539Z · LW · GW

I wonder if there is a plausible way to memetically ruggedize society against such concentrations of cultural force? I have some ideas, but none of them feel strong enough:

  • make concentration of social force taboo (Ganging-up is bad!) But I don't see how to achieve that without using CoF ourselves.
  • make "punish non-punishers" particularly taboo (How?)
  • encourage social contrarianism
  • encourage sealioning as a response to social CoF
  • preemptively ruggedize laws and contracts against future soc-CoF attacks
  • (risky) train social media algorithms to notice and flag CoF-like internet behavior

None of the above feel like they would be enough, but anything more powerful like that would introduce more problems than solutions.

Comment by Going Durden (going-durden) on Concentration of Force · 2023-04-20T07:43:48.108Z · LW · GW

One trick that worked for me in such a scenario, is to make refusal to exercise costly in terms of hassle and inconvenience.

For example, when I'm done exercising, I put my dumbbells on my gaming chair. Thus, it is impossible for me to sit down and play the next day without actually lifting them again, and if I lift them again... it's not that hard to just keep on lifting until I'm exhausted.


Trick number two was to ask my SO to remind me to work out every other day before sleep. If I refuse to do so, I have to face the minor embarrassment of explaining why to a person who knows all my lies and self-lies. Moreover, my SO has claimed a sexual preference for a physically fit partner over a pudgy one, so I'm indirectly reminded that refusing to lift is detrimental to my sexual pleasure in the long run, while getting and keeping a sixpack has enormous and enthusiastically noticed benefits.


So in effect, the concentration of force here is a pile of small inconveniences for non-compliance, and small rewards for compliance, that themselves can be established with minimum effort.