Comments

Comment by rsaarelm on What's a good book for a technically-minded 11-year old? · 2024-11-06T07:26:52.789Z · LW · GW

Lewis Dartnell's The Knowledge: How to Rebuild Our World from Scratch is a sort of grand tour of the technological underpinnings of industrial civilization and how you might bootstrap them. It might be a bit dry, but it's popular writing, and if the kid's already reading encyclopedias it should fit right in. Lots of concrete details about specific technologies.

Might go for a left-field option and see what he makes of Euclid's Elements.

Comment by rsaarelm on Bitter lessons about lucid dreaming · 2024-10-17T06:30:38.437Z · LW · GW

I haven't tried galantamine, but I didn't find the drugless techniques to be all alike. The standard advice of keeping a dream diary, psyching yourself up to have a lucid dream, and doing reality checks never worked at all for me. Wake-back-to-bed, on the other hand, got me dozens of lucid dreams and often worked the first time I tried it after a break. It's also annoying to do because it involves messing with your sleep cycle and waking yourself up in the early morning, and it seems to always stop working if I try to do it multiple nights in a row.

I agree with the other parts though; the lucid dreams are generally pretty short and kind of samey. Maybe it takes a longer dream for the narrative to get properly weird, and the WBTB lucids are more often short dreams that start out of nowhere than ones where you become lucid midway through an involved dream. They're also too sporadic to get any sort of ongoing active imagination practice going, since I don't have any routine of trying WBTB once every week or something. Robert Waggoner's lucid dreaming book talks more about the possible ongoing psychological development you could make happen with repeated lucid dreams, as opposed to the "hey, lucid dreams are a thing" books, but I guess a regular routine and some kind of intentional approach would help a lot here.

One thing I've been thinking is that the stories about shamanic journeys sound a whole lot like lucid dreaming, so maybe you could take a page from there. Try to travel to the underworld or overworld, meet some spirit entities, ask them what's up and maybe have a nice chat about large integer factorization.

Comment by rsaarelm on Building an Inexpensive, Aesthetic, Private Forum · 2024-09-11T06:17:28.127Z · LW · GW

Probably not everyone who participates is a GitHub-using programmer, but if they were, a stupid five-minute solution might be to just set up a private GitHub project and use its issue tracker for forum threads.
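
If everyone did happen to be on GitHub, the whole thing is scriptable too. A minimal sketch against the GitHub REST API (the repo name, username and token here are placeholder assumptions; the web UI does the same in a couple of clicks):

    import requests

    TOKEN = "ghp_..."  # hypothetical personal access token with repo scope
    HEADERS = {
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    }

    # Create the private repo that acts as the forum.
    requests.post(
        "https://api.github.com/user/repos",
        headers=HEADERS,
        json={"name": "private-forum", "private": True},
    ).raise_for_status()

    # Each forum thread is an issue; replies are issue comments.
    requests.post(
        "https://api.github.com/repos/YOUR_USER/private-forum/issues",
        headers=HEADERS,
        json={"title": "Welcome thread", "body": "Introduce yourself here."},
    ).raise_for_status()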

Comment by rsaarelm on you should probably eat oatmeal sometimes · 2024-08-29T04:05:37.406Z · LW · GW

I had the same problem, then I started mixing cottage cheese in the oatmeal and that fixed it.

Comment by rsaarelm on It's time for a self-reproducing machine · 2024-08-08T06:26:02.827Z · LW · GW

Back when I read about people claiming a RepRap could reproduce itself, I felt like the claim implied it would build the electronics of the new RepRap from scratch as well, and was confused since obviously a 3D printer can't double as a chip fab. The gold standard for a self-replicating machine for me is something like plants, which can turn high-entropy raw materials like soil and ores into more of themselves given a source of energy. I guess you could distinguish autotrophic self-reproducing machines, which can do their thing given a barren planet and sunlight, from heterotrophic self-reproducing machines, for which selling machined components over the internet and using the income to buy CPU chips and hire workers to assemble the skeleton of a new automated workshop counts as a valid strategy.

Comment by rsaarelm on Rationalist Purity Test · 2024-07-10T06:24:55.069Z · LW · GW

It's testing for conformity to folk values, not mythic values.

Comment by rsaarelm on Sci-Fi books micro-reviews · 2024-06-25T06:23:05.177Z · LW · GW

Great post. I've been trying to find SF reviews that aren't just blurbs, to get an idea of what's currently going on with the scene. With the exception of Tchaikovsky, most authors whose names keep popping up seem to still be ones who started publishing back in the 20th century. Unfortunately, I already know about most of the books on this list. So I'm going to write a wishlist of books I've heard of but don't know that much about and would like to see reviews of:

  • Radix series by AA Attanasio
  • Starfishers series by Glen Cook
  • The Gap Cycle by Stephen R. Donaldson
  • David's Sling by Marc Stiegler
  • The Truth Machine by James Halperin
  • Appleseed by John Clute
  • Light and Nova Swing by M. John Harrison
  • Gridlinked by Neal Asher
  • The Quiet War by Paul McAuley
  • Silo series by Hugh Howey
  • Imperial Radch series by Ann Leckie
  • Southern Reach trilogy by Jeff VanderMeer
  • The Thing Itself by Adam Roberts
  • Too Like the Lightning by Ada Palmer
  • Crystal trilogy by Max Harms
  • Gnomon by Nick Harkaway
  • Stronger, Faster, and More Beautiful by Arwen Elys Dayton
  • This Is How You Lose the Time War by Amal El-Mohtar and Max Gladstone
  • The Last Astronaut by David Wellington
  • A Memory Called Empire by Arkady Martine
  • XX by Rian Hughes
  • Termination Shock by Neal Stephenson
  • Virtua by Karl Olsberg

Comment by rsaarelm on Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles · 2024-03-03T10:03:18.099Z · LW · GW

James Gleick's Genius cites a transcript of "Address to Far Rockaway High School" from 1965 (or 1966, according to the California Institute of Technology archives), in which Feynman talks about how he got a not-exceptionally-high 125 for his IQ score. I couldn't find an online version of the transcript anywhere with a quick search.

Comment by rsaarelm on How has internalising a post-AGI world affected your current choices? · 2024-02-11T13:34:40.257Z · LW · GW

I've stopped trying to make myself do things I don't want to do. Burned out at work, quit my job, became long-term unemployed. The world is going off-kilter, the horizons for comprehensible futures are shrinking, and I don't see any grand individual-scale quest to claw your way from the damned into the elect.

Comment by rsaarelm on An Invitation to Refrain from Downvoting Posts into Net-Negative Karma · 2024-01-31T11:47:55.120Z · LW · GW

How many users can you point to who started out making posts that regularly got downvoted to negative karma and later became good contributors? Or, alternatively, specific ideas that were initially only presented by regularly downvoted users and were later recognized as correct and valuable? My starting assumption is that it's basically wishful thinking that this would happen much under any community circumstances; people who write badly will mostly keep writing badly, and people who end up writing outstanding stuff mostly start out writing better-than-average stuff.

Comment by rsaarelm on We shouldn't fear superintelligence because it already exists · 2024-01-08T09:04:34.489Z · LW · GW

On the exponentially self-improving part, No Evolutions for Corporations or Nanodevices.

Comment by rsaarelm on A Challenge to Effective Altruism's Premises · 2024-01-07T07:28:09.267Z · LW · GW

Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette, sans information that would permit the “updating” of beliefs).

This post has a terrible writing style. Based on your posting history, you've been here for a year, writing similarly badly styled posts; people have commented on the style, and you have neither engaged with the comments nor tried to improve. Why shouldn't people just downvote and move on at this point?

Comment by rsaarelm on If Clarity Seems Like Death to Them · 2023-12-31T08:37:46.599Z · LW · GW

Is this your first time running into Zack's stuff? You sound like you're talking to someone who showed up out of nowhere with a no-context crackpot manuscript and has zero engagement with the community. Zack's post is about his actual engagement with the community over a decade. We've seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn't look like an ongoing psychotic break), he's responsive to comments, and his thesis generally makes sense. This isn't drive-by crackpottery, and it's on LessWrong because it's about LessWrong.

Comment by rsaarelm on Has anyone here investigated the occult community? It is curious to me that many magicians consider themselves empiricists. · 2023-12-13T21:37:06.698Z · LW · GW

Record-keeping isn't enough to make you a scientist. People might be making careful records and then analyzing them badly, and if there's no actual effect going on, selection effects will leave you with a community of mis-analyzers.

Comment by rsaarelm on The Alignment Agenda THEY Don't Want You to Know About · 2023-11-30T08:29:02.573Z · LW · GW

The PDF is shown in full for me when I scroll down the academia.edu page, here's an archive.is capture in case this is some sort of intermittent A/B testing thing.

Comment by rsaarelm on [deleted post] 2023-11-13T18:21:21.555Z

There might not be, but it's not a thing in a vacuum; it was coined with political intent and it's tangled with that intent.

Comment by rsaarelm on [deleted post] 2023-11-13T15:13:32.933Z

Blithely adopting a term that seems to have been coined just for the purposes of doing a smear job makes you look like either a useful idiot or an enemy agent.

Comment by rsaarelm on Focus on existential risk is a distraction from the real issues. A false fallacy · 2023-11-01T07:38:05.124Z · LW · GW

The post reads like a half-assed college essay where you're going through the motions of writing without things really coming together. It's heavy on structure, but there's no clear thread of rhetoric progressing through it, and it's hard to get a clear sense of where you're coming from with the whole thing. The overall impression is just a list of disjointed arguments, essay over.

Comment by rsaarelm on "The Heart of Gaming is the Power Fantasy", and Cohabitive Games · 2023-10-10T14:11:36.763Z · LW · GW

I've been gaming for some 35 years and I don't play any multiplayer games at all. I don't remember the ten or so people in my social hangouts who regularly talk about what they're playing talking much about PVP either; they seem to mostly play single-player simulator, grand strategy, and CRPG games, or cooperative multiplayer games.

Comment by rsaarelm on A short calculation about a Twitter poll · 2023-08-20T07:36:32.879Z · LW · GW

All else being equal, do you prefer to live in a society where many members are madmen and idiots or in a society where few members are madmen and idiots?

Comment by rsaarelm on Uploads are Impossible · 2023-05-13T07:15:01.448Z · LW · GW

"It can't happen and it would also be bad if it happened" seems to be a somewhat tempting way to argue these topics. When trying to convince an audience that thinks "it probably can happen and we want to make it happen in a way that gets it right", it seems much worse than sticking strictly to either "it can't happen" or "we don't know how to get it right for us if it happens". When you switch to talking about how it would be bad, you come off as scared and lying about the part where you assert it is impossible. It has the same feel as an 18th century theologian presenting a somewhat shaky proof for the existence of God and then reminding the audience that life in a godless world would be unbearably horrible, in the hope that this might make them less likely to start poking holes into the proof.

Comment by rsaarelm on Quote quiz: “drifting into dependence” · 2023-04-28T04:01:14.855Z · LW · GW

Ted Kaczynski

Comment by rsaarelm on Moderation notes re: recent Said/Duncan threads · 2023-04-25T04:49:26.642Z · LW · GW

This sounds drastic enough that it makes me wonder: since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any measurable increase in comment quality, site mood, or activity from good contributors going forward?

Also, is this an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber-room vibe to it, where you don't outright ban someone but continually humiliate them if they keep coming by, and hope they'll eventually get the hint.

Comment by rsaarelm on Killing Socrates · 2023-04-12T21:25:56.669Z · LW · GW

(That person is more responsible than any other single individual for Eliezer not being around much these days.)

Wait, the only thing I remember Said and Eliezer arguing about was Eliezer's glowfic. Eliezer dropped out of LW over an argument about how he was writing about tabletop RPG rules in his fanfiction?

Comment by rsaarelm on Don't take bad options away from people · 2023-03-27T09:55:47.882Z · LW · GW

There are already social security means-testing regimes that prod able-bodied applicants to apply for jobs and to spend their existing savings before granting them payments. If sex work and organ sales are fully normalized, these might get extended into denying social security payments until people have tried to support themselves by selling a kidney and doing sex work.

Comment by rsaarelm on When will computer programming become an unskilled job (if ever)? · 2023-03-17T08:52:58.218Z · LW · GW

The shift we're looking at is going from program code that's very close to a computer's inner workings to natural human language for specifying systems, where the specification must still unambiguously describe the business problem the program needs to solve. We already have a profession for unambiguously specifying, in natural language, complex systems with multiple stakeholders and possibly complex interactions between their parts. It's called a legislator, and it's very much not an unskilled job.

Comment by rsaarelm on Sazen · 2022-12-30T08:49:21.509Z · LW · GW

I understand esoteric as something that's often either fundamentally difficult to grasp (i.e. an esoteric concept described in a short cryptic text might not be comprehensively explainable even with a text five times as long that would be straightforward for anyone who understands the subject matter to write) or intentionally written in a way that keeps it obscured from a cursory reading. The definition of hieratic doesn't really connote conceptual difficulty beyond mundane technical complexity, or any particular intention to keep things hidden, just that writing can be made much more terse if you assume an audience that is already familiar with what it's talking about.

Comment by rsaarelm on Sazen · 2022-12-23T08:17:02.193Z · LW · GW

See also, esr's hieratic documentation.

Comment by rsaarelm on How to Take Over the Universe (in Three Easy Steps) · 2022-10-27T11:56:31.769Z · LW · GW

I'm somewhat confused why Nolan Funeral Home is one of the organizations you needed to contact about panspermia contagion, via some random person's memorial page. Is this some kind of spam program gone awry?

Comment by rsaarelm on Fullness to Indicate Cleanliness · 2022-10-11T13:56:01.072Z · LW · GW

Why not fill the detergent compartment immediately after emptying the dishwasher? Then you have closed detergent slot -> dirty dishes, open detergent slot -> clean dishes.

Comment by rsaarelm on Open & Welcome Thread - Oct 2022 · 2022-10-06T07:30:24.312Z · LW · GW

Have you run the numbers on these? For example

there are never two different subjects claiming to have been the same person

sounds like a case of the birthday paradox. Assume there have been on the order of 10^11 dead people since 8000 BCE. So if you have a test group of, say, 10,000 reincarnation claimants, each of whom can have the memories of any dead person, already claimed or not, what's the probability of actually observing two of them claiming the same dead person?
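
As a back-of-the-envelope sketch of that calculation (using the 10^11 dead people and 10,000 claimants assumed above):

    import math

    N = 10**11  # rough count of people who have died since 8000 BCE
    n = 10_000  # hypothetical pool of reincarnation claimants

    # Birthday-paradox approximation for at least one shared claim:
    # P = 1 - exp(-n*(n-1) / (2*N))
    p = 1 - math.exp(-n * (n - 1) / (2 * N))
    print(f"{p:.4%}")  # ~0.05%, so seeing no collisions is what randomness predicts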

The bit about the memories always being from dead people is a bit more plausible. Something like 10% of all the people who ever lived are alive right now, so assuming the memories are random and you can actually verify where they came from, you should see living-people memories pretty fast.

Comment by rsaarelm on Open & Welcome Thread - Oct 2022 · 2022-10-06T07:08:33.724Z · LW · GW

But I’m curious now, is there a fairly sizable contingent of academic/​evidential dualists in the rationalist community?

It's more empirical than ideological for me. There are these pockets of "something's not clear here", where similar things keep being observed, don't line up with any current scientific explanation, and even people who don't seem obviously biased start going "hey, something's off here". There's the recent US Navy UFO sightings thing that nobody seems to know what to make of, and there's Daryl Bem's 2011 ESP study that follows stuff by people like Dean Radin, who seem to keep claiming the existence of a very specific sort of PSI effect. Damien Broderick's Outside the Gates of Science was an interesting overview of this stuff.

I don't think I've heard much of reincarnation research recently, but it was one of the three things Carl Sagan listed as having enough plausible-looking evidence for them that people should look a lot more carefully into them in The Demon-Haunted World in 1996, when the book was otherwise all about claims of the paranormal and religious miracles being bunk. I guess the annoying thing with reincarnation is that it's very hard to study rigorously if brains are basically black boxes. The research is postulating whole new physics, so things should be established with the same sort of mechanical rigor and elimination of degrees of freedom as existing physics is, and "you ask people to tell you stories and try to figure out if the story checks out but it's completely implausible for the person telling it to you to know it" is beyond terrible degrees-of-freedom-wise if you think of it like a physicist.

When you keep hearing about the same sort of weird stuff happening and don't seem to have a satisfying explanation for what's causing it, that makes it sound like there's maybe some things that ought to be poked with a stick there.

On the other hand, there's some outside view concerns. Whatever weird thing is going on seems to be either not really there after all, or significantly weirder than any resolved scientific phenomenon so far. Scientists took reports of PSI seriously in the early 20th century and got started trying to study them (Alan Turing was still going "yeah, human telepathy is totally a thing" in his Turing Test paper). What followed was a lot of smart people looking into the shiny new thing and accomplishing very little. Susan Blackmore spent decades studying parapsychology and ended up vocally disillusioned. Dean Radin seems to think that the PSI effect is verified, but it's so slight that "so go win the Randi Prize" doesn't make sense because the budget for a statistically conclusive experiment would be bigger than the prize money. And now we're in the middle of the replication crisis (which Radin mentions zero times in a book he published in 2018), and psychology experiments that report some very improbable phenomenon look a lot less plausible than they did 15 years ago.

The UFO stuff also seems to lead people into strange directions of thinking that something seems to be going on, but it doesn't seem to be possible for it to be physical spacecraft. Jacques Vallée ended up going hard on this path and pissed off the science-minded UFOlogists. More recently, Greg Cochran and Lesswrong's own James Miller talked about the Navy UFO reports and how the reported behavior doesn't seem to make sense for any physically real object on Miller's podcast (part 1, part 2).

So there's a problem with the poke things with a stick idea. A lot of smart people have tried, and have had very little progress in the 70 years since the consensus as reported by Alan Turing was that yeah this looks like it's totally a thing.

Comment by rsaarelm on Open & Welcome Thread - Oct 2022 · 2022-10-06T05:45:14.180Z · LW · GW

Any thoughts on Rupert Sheldrake? Complex memories showing up with no plausible causal path sounds a lot like his morphic resonance stuff.

Also, an old thing from Ben Goertzel that might be relevant to your interests: Morphic Pilot Theory, which hypothesizes some sort of compression artifacts in quantum physics that can pop up as inexplicable paranormal knowledge.

Comment by rsaarelm on How and why to turn everything into audio · 2022-08-12T05:32:27.183Z · LW · GW

Still makes sense if you listen while walking or driving, when you couldn't read a book anyway. I mostly listen to podcasts instead of audiobooks though; a book is a really long commitment compared to a podcast episode.

Comment by rsaarelm on How and why to turn everything into audio · 2022-08-12T05:29:49.949Z · LW · GW

Podcast transcription services, probably. They seem to cost around $1 per minute nowadays. I expect they'll keep getting disrupted by AI. There are already audio transcription AIs, like the autogenerated subtitles on YouTube, but they get context-dependent ambiguous words wrong. It seems like an obvious idea to plug them into a GPT-style language model that can recognize the topic being talked about and use that to pick the appropriate transcription for homonyms.
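
A minimal sketch of what that reranking could look like, assuming the Hugging Face transformers library with GPT-2 as a stand-in language model (the candidate sentences are made up for illustration):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def avg_nll(sentence: str) -> float:
        """Average per-token negative log-likelihood under the LM."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids, labels=ids).loss.item()

    # The speech recognizer can't decide between two homonyms; let the
    # language model pick the reading that fits the context.
    candidates = [
        "I don't know whether it will rain tomorrow.",
        "I don't know weather it will rain tomorrow.",
    ]
    print(min(candidates, key=avg_nll))  # the contextually sensible one wins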

Comment by rsaarelm on Salvage Epistemology · 2022-05-07T11:56:37.418Z · LW · GW

You seem to be claiming that whatever does get discovered, which might be interpreted as proof of the spiritual in another climate, will get distorted to support the materialist paradigm. I'm not really sure how this would work in practice. We already have something of a precommitment about what we expect the "supernatural" to look like: ontologically basic mental entities. So far the discoveries of science have been nothing like that, and if new scientific discoveries suddenly were, I find it very hard to imagine quite a lot of people outside the "priesthood" not sitting up and paying very close attention.

I don't really follow your arguments about what matter is and about past scientists being wrong. Science improved and proved past scientists mistaken; that's the whole idea of science. Spiritualists have not improved much so far. And the question with matter isn't so much what it is (what would an answer to this even look like?), but how matter acts, and science has done a remarkably good job at that part.

Comment by rsaarelm on Salvage Epistemology · 2022-05-06T06:33:33.600Z · LW · GW

Are people here mostly materialists?

Okay, since you seem interested in knowing why people are materialists. I think it's the history of science up until now. The history of science has basically been a constant build-up of materialism.

We started out at prehistoric animism, where everything that happened, except that rock you just threw at another rock, was driven by an intangible spirit. (The rock wasn't, since that was just you throwing it.) And then people started figuring out successive compelling narratives about how more complex stuff is also just rocks being thrown about. Planets being driven by angels? Nope, just gravitation and inertia. Okay, so comets don't have comet spirits, but surely living things have spirits? Turns out no; molecular biology is a bit tricky, but it seems to still paint a (very small) rocks-being-thrown-about picture that convincingly gets you a living tree or a cat. Human minds looked unique until people started building computers. The same story keeps repeating: people point to human activities as proofs of the indomitable human spirit, then someone builds an AI to do them. In 1979 Douglas Hofstadter was still predicting that mastering chess would have to involve encompassing the whole of human cognition, and he had to eat crow in the introduction to the 20th anniversary edition of his book.

So to sum up: simple physics went from spiritual (Aristotle's "rocks want to go down, smoke wants to go up") to materialist, outer space went from spiritual to materialist, biological life went from spiritual to materialist, and mental acts like winning a chess game went from spiritual to materialist.

We're now down to the hard problem of consciousness, and we're also still missing a really comprehensive scientific picture of how you go from neurons to high-level human thought. So which way do you think this is going to go? A discovery that the spiritual world exists after all and was hiding in the microtubules of the human brain all along, or people looking at the finished blueprint of how the brain works, which explains things all the way up to conscious thought, and going "oh, so that's how it works", with it all being just rocks thrown about once again? So far we've got a perfect record of everybody clamoring for the first option and then things turning out to be the second one.

Comment by rsaarelm on I read Einstein's biography. Here are 15 quotes that reveal his philosophy on life. · 2022-03-17T07:47:42.141Z · LW · GW

OP might be some sort of content farming sockpuppet. No activity other than this post, and this was posted within a minute of a (now deleted) similarly vacuous post from a different account with no prior site activity as well.

Comment by rsaarelm on We're already in AI takeoff · 2022-03-10T10:25:28.543Z · LW · GW

In a Facebook post I argued that it’s fair to view these things as alive.

Just a note: unlike in the recent past, Facebook post links now seem to be completely hidden unless you are logged into Facebook when opening them, so they are basically broken as any sort of publicly viewable resource.

Comment by rsaarelm on [David Chapman] Resisting or embracing meta-rationality · 2022-03-02T08:56:20.141Z · LW · GW

You seem to frame this as either there being advanced secret techniques, or it just being a matter of common sense and wisdom and as good as useless. Maybe there's some initial value just in trying to name things more precisely, though, and painting a target on them: "we don't understand this region, which now has a name, nearly as well as we'd like". Chapman is a former AI programmer from the 1980s, and my reading of him is that he's basically been trying to map the poorly understood half of human rationality whose difficulty blindsided 20th-century AI programmers.

And very smart and educated people were blindsided when they got around to trying to build the first AIs. This wasn't a question of charlatans or people lacking common sense. People really didn't think to break rationality apart into the rule-following ("solve this quadratic equation") and pattern-recognition ("is that a dog?") parts, because up until the 1940s all rule-based organizations were run solely by humans, who cheat and constantly apply their pattern-recognition powers to nudge just about everything going on.

So are there better people than Chapman talking about this stuff, or is there an argument why this is an uninteresting question for human organizations despite it being recognized as a central problem in AI research with things like the Moravec paradox?

Comment by rsaarelm on [David Chapman] Resisting or embracing meta-rationality · 2022-03-02T06:12:32.894Z · LW · GW

You really do have to gesture vaguely, and then say “GO DO THINGS YOU DON’T KNOW HOW TO DO”, and guide them to reflect on what they’re doing when they don’t know what they’re doing.

This is pretty much what I'm referring to as the "mystery". It's not that it's fundamentally obscure; it's just that the expected contract of teaching, "I will tell you how to do what I expect you to do in clear language", breaks down at this point. Instead you would need to say: "I've been giving you many examples that work backwards from a point where the problem has already been recognized and a suitable solution context has been recalled or invented. Your actual work is going to involve recognizing problems, matching them with solution contexts, and, if you have an advanced position, coming up with new solution frameworks. I have given you very little actionable advice for doing this part because I don't know how to describe how it's done, and neither do other teachers. I basically hope that your brain somehow picks up on how to do this part on its own after you have worked through many exercises, but it's entirely possible that it fails to do that for one reason or another, and then you might be out of luck for being able to work in this field." I don't think I've ever seen a teacher actually spell things out like this, and this doesn't seem to be anything like common knowledge.

Comment by rsaarelm on [David Chapman] Resisting or embracing meta-rationality · 2022-03-01T07:01:17.516Z · LW · GW

A fully meta-rational workplace is still sorta waffly about how you actually accomplish the thing, but it feels like an okay example of showing meta-rationality as "the thing you do when you come up with the rules, procedures and frameworks for (Chapman's) rational level at the point of facing undifferentiated reality without having any of those yet".

People have argued that this is still just rationality in the Lesswrong sense, but I think Chapman's on to something in that the rules, procedures and frameworks layer is very teachable and generally explicable, while the part where you first come up with new frameworks, or spontaneously recognize existing frameworks as applying or not applying to a novel situation, is also obviously necessary but much more mysterious, both in how you can teach it and in exactly which mental motions you go through when doing it.

Comment by rsaarelm on Daniel Kokotajlo's Shortform · 2022-01-26T07:30:55.824Z · LW · GW

William Gibson and Idoru.

Comment by rsaarelm on Is AI Alignment a pseudoscience? · 2022-01-24T06:00:18.322Z · LW · GW

Hello new user mocny-chlapik who dropped in to tell us that talking about AGI is incoherent because of Popper, welcome to Less Wrong. Are you by chance friends with new user Hickey who dropped in a week ago to tell us that talking about AGI is incoherent because of Popper?

Comment by rsaarelm on How do you write original rationalist essays? · 2022-01-21T05:42:13.739Z · LW · GW

Also a good point, though this is maybe a different thing from the deliberate effort thing again. The whole concept of "be equal to the [top visible person] in [field of practice]" sounds like a weak warning signal to me if it's the main desire in your head. It sounds like a mimetic desire thing, where [field of practice] might actually be irrelevant to whatever is ticking away in your head, and the social ladder game is what's actually going on.

A healthier mindset might be "I really want to make concepts that confuse me clearer", "I have this really cool-seeming intuitive idea and I want to iron it out and see if it has legs" or just "I like putting words to paper", if you're looking at writing. Likewise for business, "I want to learn how to make things more efficient", "I want to create services that make people's lives better" or "I have this idea for a thing that I think would be awesome and nobody's making" are probably better than "I want to be the next Jeff Bezos".

If you have fun programming right now, how much do you care that John Carmack is better at it than you are?

Comment by rsaarelm on Viliam's Shortform · 2022-01-07T06:03:21.134Z · LW · GW

If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).

Wasn't Arbital pretty much supposed to be this?

Comment by rsaarelm on The Plan · 2021-12-28T07:44:12.936Z · LW · GW

It’s totally possible to think there’s a plain causal explanation about how humans evolved (through a combination of drift and natural selection, in which proportion we will likely never know) - while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).

If we believe there is a plain causal explanation, that rules out some explanations we could imagine. Humans can't have been created by a supernatural agency (as was widely thought in Antiquity, the Middle Ages and the Renaissance, when most of the canon of philosophy was developed), and basic human functioning probably shouldn't involve processes wildly contrary to known physics (something still believed by some smart people, like Roger Penrose).

The other aspect is computational complexity. If we assume the causal explanation, we also get quantifiable limits on how much evolutionary work and complexity can have gone into humans. People are generally aware that there's a lot of it, and a lot less aware that it's quantifiably finite. The size of the human genome, which we can measure, creates one hard limit on how complex a human being can be (see the arithmetic below). The limited amount of sensory information a human can pick up growing to adulthood, and the limited amount of computation the human brain can do during that time, create another. Evolutionary theory also gives us a very interesting extra hint: everything you see in nature should be reachable by a very gradual ascent of slightly different forms, all of which need to be viable and competitive, all the way from the simplest chemical replicators. So that's another limit. Whatever is going on with humans is probably not something that has to drop out of nowhere as a ball of intractable complexity, but something that can be reached by a series of small-enough-to-be-understandable improvements to a small-enough-to-be-understandable initial lifeform.
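
To make the genome bound concrete, some rough arithmetic (the 3.2 billion base pair figure is the commonly cited approximate size of the human genome):

    base_pairs = 3.2e9     # approximate size of the human genome
    bits = base_pairs * 2  # four possible nucleotides -> 2 bits per base pair
    print(bits / 8 / 1e6)  # ~800 megabytes, an upper bound before compression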

The entire sphere of complex but finite computational processes has been a blind spot for philosophy. Nobody really understood it until computers had become reasonably common. (Dennett talks about this in Darwin's Dangerous Idea when discussing Conway's Game of Life.) Actually figuring things out from opaque blobs of computation like human DNA is another problem, of course. If you want to have some fun, you can reach for Rice's theorem (basically following from Turing's halting problem), which shows that you can't algorithmically infer any nontrivial semantic property from the code of an undocumented computer program. Various existing property-inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.

So coming back to the problem,

If you spend enough time studying the many historical attempts that have been made at these explanations, you begin to see this pattern emerge where a would-be reductive theorist will either smuggle in a normative concept to fill out their causal story (thereby begging the question), or fail to deliver a theory which has the explanatory power to make basic normative distinctions which we intuitively recognize and that the theory should be able to account for (there are several really good tests out there for this—see the various takes on rule-following problems developed by Wittgenstein). Terms like “information” “structure” “fitness” “processing” “innateness” and the like all are subject to this sort of dilemma if you really put them under scrutiny.

Okay, two thoughts about this. First, yes. This sounds pretty much like the inadequacy-of-mainstream-philosophy argument that was being made on Lesswrong back in the Sequences days. The lack of satisfactory descriptions of human-level concepts that actually bottom out in reductive gears is real, but the inability to come up with the descriptions might be pretty much equivalent to the inability to write an understandable human-level AI architecture. That might be impossible, or it might be doable, but it doesn't seem like we'll find out by watching philosophers keep doing things with present-day philosopher toolkits. The people poking at the stuff are neuroscientists and computer scientists, and there's a new kind of looking-at-a-"mechanized"-mind-from-the-outside aspect to that work (see for instance the predictive coding stuff on the neuroscience side) that seems very foreign to how philosophy operates.

The second thing is, I read this and I'm asking "so, what's the actual problem we're trying to solve?" You seem to be speaking from a position of general methodological unhappiness with philosophy, where the problem is something like "you want to do philosophy as it's always been done and you want it to get traction at the cutting edge of intellectual problems of the present day". Concrete problems might be "understand how humans came to be and how they are able to do all the complex human thinking stuff", which is a lot of neuroscience plus some evolutionary biology, or "build a human-level artificial intelligence that will act in human interests no matter how powerful it is", where the second part is looking pretty difficult, so the ideal answer might be "don't", while the first part seems to be coming along with a whole lot of computer science and not having needed much input from philosophy so far. "Help people understand their place in the world, understand themselves and find life satisfaction" is a different goal again, and something a lot of philosophy used to be about. Taking the high-level human concepts that we don't have satisfactory reductions for yet as granted could work fine at this level. But there seems to be a sense of philosophers becoming glorified talk therapists here, which doesn't really feel like a satisfactory answer either.

Comment by rsaarelm on The Plan · 2021-12-25T07:39:47.024Z · LW · GW

Reductionism is not just the claim that things are made out of parts. It's a claim about explanation, and humans might not be smart enough to perform certain reductions.

So basically the problem is that we haven't got the explanation yet and can't seem to find it with a philosopher's toolkit? People have figured out a lot of things (electromagnetism, quantum physics, airplanes, semiconductors, DNA, visual cortex neuroscience) by mucking with physical things they had very little idea about beforehand, not just by being smart and thinking hard. Figuring out how human concepts ground out in physics seems to have a similar blocker: we still don't have good enough neuroscience to simulate how the brain goes from neurons to high-level thoughts (where you could observe a simulated brain-critter doing human-like things in a VR environment, to tell you're getting somewhere even when you haven't reverse-engineered the semantics of the opaque processes yet). People having that kind of model to look at and trying to make sense of it could come up with all sorts of new unobvious useful concepts, just like people trying to figure out quantum mechanics came up with all sorts of new unobvious useful concepts.

But this doesn't sound like a fun project for professional philosophers; a research project like that would need many neuroscientists and computer scientists and not very many philosophers. So if philosophers show up, look at a project like that, and go "this is stupid and you are stupid, go read more philosophy", I'm not sure they're doing it out of a purely dispassionate pursuit of wisdom.

Comment by rsaarelm on The Plan · 2021-12-24T16:44:42.902Z · LW · GW

I don't think I've seen the term "normative phenomena" before. So basically normative concepts are concepts in everyday language ("life", "health"), which get messy if you try to push them too hard? But what are normative phenomena then? We don't see or touch "life" or "health", we see something closer to the actual stuff going on in the world and then we come up with everyday word-concepts for it that sort of work until they don't.

It's not really helping in that I still have no real intuition about what you're going on about, and your AI critique seems to be aimed at something from 30 years ago instead of contemporary stuff like Omohundro's Basic AI Drives paper (you describe AIs as being "without the desire to evade death, nourish itself, and protect a physical body"; the paper's point is that AGIs operating in the physical world would have exactly that) or the whole deep learning explosion with massive datasets of the last few years (you say "we under-estimate by many orders of magnitude the volume of inputs needed to shape our 'models'"; right now people are in a race to feed ginormous input sets to deep learning systems and probably aren't stopping anytime soon).

Like, yeah. People can be really impressive, but unless you want to make an explicit case for the contrary, people here still think people are made of parts and that there exists some way to go from a large cloud of hydrogen to people. If you think there's some impossible gap between the human and the nonhuman worlds, then how do you think actual humans got here? Right now you seem to be giving the smug shrug of someone who on one hand doesn't want to ask that question themselves, because it's corrosive to dignified pre-Darwin liberal arts sensibilities, and on the other hand tries to hint to people genuinely interested in the question that it's a stupid question to ask and they should read better scholarship to convince themselves of that.

Comment by rsaarelm on The Plan · 2021-12-24T09:39:43.445Z · LW · GW

What do you mean by "naturalize" as a verb? What is "naturalizing normativity"?

Some people think that this form of metaphysical naturalism is bedrock stuff; that if we don’t accept it, the theists win, blah blah blah, so we must naturalize mentality and agency, it must exist on a continuum, we just need a theory which shows us how. Other people think we can have a non-reductive naturalism which takes as primitive the normative concepts found in biology and psychology.

Does this amount to you thinking that humans are humans because of some influence from outside of fundamental physics, which computers and non-human animals don't share?