Posts

Aligned AI Needs Slack 2022-01-26T09:29:53.897Z
You can't understand human agency without understanding amoeba agency 2022-01-06T04:42:51.887Z
You are way more fallible than you think 2021-11-25T05:52:50.036Z
Nitric Oxide Spray... a cure for COVID19?? 2021-03-15T19:36:17.054Z
Uninformed Elevation of Trust 2020-12-28T08:18:07.357Z
Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely 2020-10-22T05:30:18.648Z
Mask wearing: do the opposite of what the CDC/WHO has been saying? 2020-04-02T22:10:31.126Z
Good News: the Containment Measures are Working 2020-03-17T05:49:12.516Z
(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z
A basic probability question 2019-08-23T07:13:10.995Z
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z
Religion as Goodhart 2019-07-08T00:38:36.852Z
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z
To understand, study edge cases 2019-03-02T21:18:41.198Z
How to notice being mind-hacked 2019-02-02T23:13:48.812Z
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z
Wirehead your Chickens 2018-06-20T05:49:29.344Z
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z
[LINK] Scott Adams' "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z
[LINK] Sean Carroll's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z

Comments

Comment by shminux on Aligned AI Needs Slack · 2022-01-26T22:04:19.302Z · LW · GW

I understand the low-impact idea, and it's a great heuristic, but that's not quite what I am getting at. The impact may be high, but the space of acceptable outcomes should be broad enough that there is no temptation for the AGI to hide and deceive. A tool becoming an agent and destroying the world because it strives to perform the requested operation is more of a "keep it low-impact" domain, but to avoid tension with the optimization goal, the binding optimization constraints should not be tight, which is what slack is. I guess it hints at the issues raised in https://en.wikipedia.org/wiki/Human_Compatible, just not the approach advocated there, where "the AI's true objective remains uncertain, with the AI only approaching certainty about it as it gains more information about humans and the world".

Comment by shminux on Assuming MWI, under what decision theories are you obligated to do anything to the worlds you're aborted in? · 2022-01-24T05:25:57.777Z · LW · GW

Cosmologist Sean Carroll, the author of a highly recommended site and podcast, is a self-described "mad-dog Everettian" and a consummate Bayesian, and he repeatedly cautions against worrying about other branches that separated in the past when making decisions in the present. He is a really smart fellow who has spent a chunk of the last three years interviewing other really smart scientists across various disciplines, and is likely worth listening to, though not necessarily agreeing with wholesale. He is also much more pro-philosophy than the average physicist:

in exactly the areas of physics that I care about, in cosmology and in quantum gravity and fundamental physics, there are questions that philosophers understand at least as well, if not better than physicists do.

[...]

How do you reason anthropically in large universes, what is the nature of the arrow of time and causality and locality, what are the issues and potential solutions to problems of the foundations of quantum mechanics or statistical mechanics? These are areas where philosophers have thought deeply about them and physicists tend to gloss over them, ’cause that’s what philosophers do, they think deeply. Now, they’re not that great at coming up with solutions, because they’re philosophers, they’re not scientists. It’s not their job to come up with solutions. The philosophers are really, really good at diagnosing the problems, they’re much better than the physicists are. Physicists are good at ignoring problems when they’re not absolutely necessary for them to be confronted.

But philosophers know that these problems are there, and they try to categorise what all the different possible approaches are, etcetera. And what would really be great is if physicists and philosophers really worked together at these intersection regions. And the truth is that they don’t. There’s a little bit of overlap. There’s some good counter examples to this wild generalisation I’m pushing right here, especially in Quantum Mechanics, that’s the one area of physics/philosophy overlap where there are really both physicists and philosophers who talk about it, but it’s not as if every physicist who does Quantum Mechanics has any idea of what the philosophers are talking about, and they could. I recently saw a paper that surveyed the opinions of physicists on the foundations of quantum mechanics, and it’s just embarrassing how little factually physicists know about the different approaches, whether it’s Copenhagen or Hidden Variables, or many-worlds or whatever, there’s just very, very basic mistakes that physicists make ’cause they don’t know what’s going on.

Some of them know they don’t know what’s going on, like there’s a substantial number of people who say, “I don’t know,” but there were too many people who thought they knew, and were wrong. And philosophers, likewise, philosophers have their own workshops, conferences, what have you, and don’t always talk to physicists. And so as a result of that, the questions that are within the domain of foundations of physics, like I mentioned, naturalness, entropic, etcetera, are not necessarily the ones that philosophers are training their keen minds upon, because they don’t move around as quickly as the physicists do, jumping from topic to topic. So they all have their favorite problems and they’ll think of them for a very long time. There’s plenty of room for improvement in the interaction between physicists and philosophers.

The decision-theoretical question you are asking is likely to be one of those, and, odds are, not caring about MW branches that split off long ago is a very reasonable approach.

Comment by shminux on Is AI Alignment a pseudoscience? · 2022-01-23T21:24:03.786Z · LW · GW

I think your critique would be better understood were it more concrete. For example, if you write something like 

"In the paper X, authors claim that AI alignment requires the following set of assumptions {Y}, which they formalize using a set of axioms {Z}, used to prove a number of theorems {T}. However, the stated assumptions are not well motivated, because  [...] Furthermore, the transition from Y to Z is not unique, because of [a counterexample]. Even if the axioms Z are granted, the theorems do not follow without [additional unstated restrictions]. Given the above caveats, the main results of the paper, while mathematically sound and potentially novel, are unlikely to contribute to the intended goal of AI Alignment because [...]."

then it would be easier for the MIRI-adjacent AI Alignment community to engage with your argument.

Comment by shminux on A one-question Turing test for GPT-3 · 2022-01-22T22:04:28.510Z · LW · GW

If you asked a bunch of humans, would they make more sense than the AI?

Comment by shminux on Survey supports ‘long covid is bad’ hypothesis (very tentative) · 2022-01-14T19:29:08.915Z · LW · GW

This roughly matches the anecdotal evidence from my bubble. Something like 1 in 5 symptomatic cases get lingering symptoms that impair their ability to work and live, some fraction of those are not back to work after many months, and the long-term symptoms are almost an exact match for chronic fatigue syndrome/fibromyalgia. My hope is that, given the sheer number of long covid cases, there will be more research into what causes it, and it might end up benefiting those with CFS, who now mostly suffer and struggle in silence and isolation.

Comment by shminux on No Abstraction Without a Goal · 2022-01-10T21:39:01.521Z · LW · GW

Abstraction is a compression algorithm for a computationally bounded agent; I don't see how it is related to a "goal", except insofar as a goal is just another abstraction, and they all have to work together for the agent to maintain a reasonably faithful internal map of the world.

Comment by shminux on Is state a (semi-) rational agent? · 2022-01-10T21:01:02.470Z · LW · GW

Jody Azzouni wrote a bunch of stuff about it. He talked about whether countries are "real" in his recent podcast interview https://www.preposterousuniverse.com/podcast/2022/01/03/178-jody-azzouni-on-what-is-and-isnt-real/ (if you'd rather read, a transcript link is in there, as well).

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-09T03:05:50.549Z · LW · GW

That is, there isn't much to study in "abstract" agency, independent of the substrate it's implemented on

Yeah, that's the question: is agency substrate-independent or not? And if it is, does it help to pick a specific substrate, or would one make more progress by studying it more abstractly, or maybe both?

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-07T08:42:05.883Z · LW · GW

I am having trouble understanding the "free energy principle" as anything more than a control system that tries to minimize prediction error. If that's all it is, there is nothing special about living systems; engineers have been building control systems for a long time. By that definition, a Boston Dynamics walking robot is definitely a living system...

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-07T08:34:33.288Z · LW · GW

But such deflationary notions of agency seem deeply uncomfortable to a lot of people because they violate the very human-centric notion that lots of simple things don't have "real" agency because we understand their mechanism, whereas things with agency seem to be complex in a way that we can't easily understand how they work.

Yeah, that seems like a big part of it. I remember posting to that effect some years ago https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty

But given that we want to understand "real" agency, not some "mysterious agency" stemming from not understanding the inner workings of some glorified thermostat, would it not make sense to start with something simple?

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-07T08:28:50.663Z · LW · GW

Right, something like that. A crow is smart, though. That's why I picked an example of a single-cell organism.

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-07T08:27:15.486Z · LW · GW

"you can't understand human intelligence without understanding amoeba intelligence"

That does sound less trivially true, I agree. I am not sure what the difference is exactly... 

nor does studying amoebas seem likely to be on the shortest path to AGI.

I don't see how this follows. Not studying amoebas per se, but studying the basic building blocks of intelligence, starting somewhere around the level of an amoeba, whatever those might turn out to be.

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-06T10:16:54.698Z · LW · GW

It's a good point that there are trade-offs, and highly optimized programs, even if they perform a simple function, are hard to understand without "being inside" one. That's one reason I linked a post about an even simpler and well-understood potentially "agentic" system, the Game of Life, though it focuses on a different angle, not "let's see what it takes to design a simple agent in this game".

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-06T07:27:56.064Z · LW · GW

Compare: "You can't understand digital addition without understanding Mesopotamian clay token accounting".

Well, if we didn't understand digital addition and were only observing some strange electrical patterns on a mysterious blinking board, going back to clay token accounting might not be a bad idea. And we do not understand agency, so why not go back to basics?

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-06T07:23:21.216Z · LW · GW

Why not talk about the agency of electrons?

Indeed, why not? Where is the emergence threshold, or zone? I would think this is where one would want to start understanding the concept of agency.

Comment by shminux on You can't understand human agency without understanding amoeba agency · 2022-01-06T06:19:29.486Z · LW · GW

Good point, introspection is a better term. 

Comment by shminux on Newcomb's Problem as an Iterated Prisoner's Dilemma · 2022-01-06T02:39:18.307Z · LW · GW

The converse statement was discussed over 50 years ago: https://www.jstor.org/stable/2265034

Comment by shminux on A Reaction to Wolfgang Schwarz's "On Functional Decision Theory" · 2022-01-05T10:04:18.212Z · LW · GW

Every discussion of decision theories that is not just "agents with max EV win", where EV is calculated as a sum over outcomes of "probability of the outcome times the value of the outcome", ends up fighting the hypothetical, usually by yelling that in zero-probability worlds someone's pet DT does better than the competition. A trivial calculation shows that winning agents do not succumb to blackmail, stay silent in the twin PD, one-box in all Newcomb variants, and procreate in the miserable-existence case. I don't know if that's what FDT does, but it is, hopefully, what a naive max-EV calculation suggests.
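A minimal sketch of that max-EV calculation for the standard Newcomb setup (the payoffs are the usual ones, but the 0.99 predictor accuracy and the function names are my own illustrative choices):

```python
# Naive max-EV agent for Newcomb's problem (illustrative sketch).
# The predictor fills the opaque box iff it predicts one-boxing,
# and we assume it predicts correctly with probability `accuracy`.

def expected_value(action: str, accuracy: float = 0.99) -> float:
    big, small = 1_000_000, 1_000
    if action == "one-box":
        # With prob `accuracy` the predictor foresaw this and filled the box.
        return accuracy * big + (1 - accuracy) * 0
    else:  # "two-box"
        # With prob `accuracy` the predictor foresaw this and left it empty.
        return accuracy * small + (1 - accuracy) * (big + small)

best = max(["one-box", "two-box"], key=expected_value)
print(best)  # -> "one-box" for any accuracy above ~50.05%
```

The same template (enumerate outcomes, weight values by probabilities, pick the max) handles the blackmail and twin-PD cases once you grant that the other party's behavior is correlated with your decision.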

Comment by shminux on How do you turn an idea into a reality? (warning: involves politics) · 2022-01-05T01:41:19.908Z · LW · GW

The general approach that has been proven to work is "turn your idea into a potential money-maker and be/find a person who can push it through to completion, like Musk/Jobs/Thiel".

Comment by shminux on The Plan · 2022-01-01T23:59:05.848Z · LW · GW

Is an e-coli an agent? Does it have a world-model, and if so, what is it? Does it have a utility function, and if so, what is it? Does it have some other kind of "goal"?

That's the part I find puzzling in terms of the lack of time devoted to it: how can one talk about agency without figuring out basics like that? Though I personally argued that it might not even be possible, in this post, which conjectured that vapor bubbles "maximizing their volume" in a pot of boiling water are not qualitatively different from bacteria going up a sugar gradient in search of food.

Comment by shminux on Help figuring out my sexuality? · 2021-12-22T23:04:33.788Z · LW · GW

Consider joining Fetlife and browsing around for something that piques your interest, and maybe finding a group of people who have matching needs.

Comment by shminux on Universal counterargument against “badness of death” is wrong · 2021-12-18T20:53:45.052Z · LW · GW

Interestingly, fewer (non-religious) people argue against a reframing of immortality as "eternal youth with a voluntary check-out option".

Comment by shminux on Leverage · 2021-12-15T09:09:38.681Z · LW · GW

Note that Trump achieved a lot of leverage in 2016 through the media, both conservative and progressive, without ever explicitly using any of the above techniques. Unlike Jobs or Musk, who created a relevant brand, he relied on polarization (and performance) to get the kind of publicity no capital can buy and no network can deliver.

Comment by shminux on We'll Always Have Crazy · 2021-12-15T09:02:41.229Z · LW · GW

Unless I am missing something, you are describing God of the gaps in a lot more words, plus some pop-evo-psych motivation for it.

Comment by shminux on Some abstract, non-technical reasons to be non-maximally-pessimistic about AI alignment · 2021-12-14T04:02:51.508Z · LW · GW

How many of those results are accepted as interesting and insightful outside MIRI?

Comment by shminux on You are way more fallible than you think · 2021-12-04T01:56:56.710Z · LW · GW

I agree that there are people who don't need this warning most of the time, because they already double- and triple-check their estimates and are the first to admit their fallibility. "Most of us" are habitually overconfident, though. I also agree that the circumstances matter a lot: some people in some circumstances can be accurate at the 1% level, but most people in most circumstances aren't. I'm guessing that superforecasters would not even try to estimate anything at the 1% level, realizing they cannot do it well enough. We are most fallible when we don't even realize we are calculating odds (there is a suitable HPMOR quote about that, too).

Your example of giving a confidence interval or a range of probabilities is definitely an improvement over the usual Bayesian point estimates, but I don't see any easily accessible version of the Bayes formula for ranges, though admittedly I'm not looking hard enough. In general, thinking in terms of distributions, not point estimates, seems like it would be progress. Mathematicians and physicists already do that in professional settings.
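For what it's worth, a minimal sketch of what "Bayes for ranges" could look like, assuming one is content with a conjugate-prior shortcut: keep a Beta distribution over the probability itself and update its two hyperparameters on observed counts (the prior numbers here are my own illustrative choice):

```python
# Track a distribution over a probability instead of a point estimate.
# Prior: Beta(a, b); each observed success/failure just increments a count.

def beta_update(a: float, b: float, successes: int, failures: int):
    return a + successes, b + failures

def beta_mean(a: float, b: float) -> float:
    return a / (a + b)

# A vague prior centered near 1%: Beta(1, 99) has mean 0.01 but wide spread.
a, b = 1.0, 99.0
a, b = beta_update(a, b, successes=3, failures=200)
print(beta_mean(a, b))  # posterior mean ~0.013, and the spread has narrowed
```

The point is that the output is a whole distribution, so "I'm 99% sure" becomes a claim you can check against the width of the posterior rather than a bare number.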

Comment by shminux on Morality is Scary · 2021-12-02T08:13:49.622Z · LW · GW

To repost my comment from a couple of weeks back, which seems to say roughly the same thing, though not as well:

I don't believe alignment is possible. Humans are not aligned with other humans, and the only thing that prevents an immediate apocalypse is the lack of recursive self-improvement on short timescales. Certainly, groups of humans happily destroy other groups of humans, and often destroy themselves in the process of maximizing something like the number of statues. The best we can hope for is that whatever takes over the planet after the meatbags are gone has some of the same goals that the more enlightened meatbags had, where "enlightened" is a very individual definition. Maybe it is a thriving and diverse Galactic civilization, maybe it is the word of God spread to the stars, maybe it is living quietly on this planet in harmony with nature. There is no single or even shared vision of the future that can be described as "aligned" by most humans.

Comment by shminux on How do you write original rationalist essays? · 2021-12-01T08:37:47.210Z · LW · GW

But gun to my head - I can't seem to just sit down and make up an original non-fiction essay worthy of Less Wrong

With a gun to your head you would. It's amazing what the right motivation can do.

Comment by shminux on Seeking Truth Too Hard Can Keep You from Winning · 2021-11-30T05:52:45.218Z · LW · GW

As an arealist, I certainly can't disagree with your definition of truth, since it matches mine. In fact, I have stated on occasion that tabooing "true", say, by replacing it with "accurate" where possible, is a very useful exercise.

The problem of the criterion dissolves once you accept that you are an embedded agent with a low-fidelity model of the universe you are embedded in, including yourself. There is no circularity. Knowing how to know something is an occasionally useful step, but not essential for extracting predictions from the model of the universe, which is the agent's only action, sort of by definition. Truth is also an occasionally useful concept, but accuracy of predictions is what makes all the difference, including being able to model such parts of the world as other agents with different world models. "Knowledge" is a poor term for the accuracy of the model of the world, or, as you said, "accurate predictions about our experiences". Accepting your place in the world as one of a multitude of embedded agents, with various internal models, who also try to (out)model you is probably one of the steps toward a more accurate model.

Comment by shminux on Question/Issue with the 5/10 Problem · 2021-11-30T03:22:53.094Z · LW · GW

From the link:

"I have to decide between $5 and $10. Suppose I decide to choose $5. I know that I'm a money-optimizer, so if I do this, $5 must be more money than $10, so this alternative is better. Therefore, I should choose $5."

To me there is a sleight of hand there. The statement "I know that I'm a money-optimizer" is not a mathematical statement but an empirical one; it can be tested through one's actions. If you take $5 instead of $10, you are not a money-optimizer, even if you initially think you are, and that is something you, as an agent, can learn about yourself by observing your own actions.

Comment by shminux on Why Study Physics? · 2021-11-29T05:29:41.961Z · LW · GW

I'm sure one can train this skill, to some degree at least. I don't think I got better at it, but I did use "the appropriate level of abstraction" to get the numerical part of my thesis done without needing a lot of compute.

By the way, I agree that finding the appropriate level of abstraction is probably the core of what the OP describes.

Comment by shminux on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-11-28T21:31:16.418Z · LW · GW

The question for me is how much these observations apply to peasant life in other places and at other times. I’m hesitant to generalize, since this is the first book-length work of ethnography I’ve read in the context of this project, but for me it opens questions. Is cruelty towards animals and children, and an almost slave status for women, the norm?

The modern Western notion of classifying certain behaviors as cruelty, dishonesty, abuse, and so on emerged from a life of surplus, when one could afford this luxury. Morals emerge from the need to survive, or are tailored to that need, so I expect that most societies at a level of poverty similar to that of the Russian peasantry look roughly the same, even in modern times. This should be easy to look up.

Comment by shminux on Is it better to fix a problem directly, or start again so the problem never happens? · 2021-11-28T20:17:21.139Z · LW · GW

Rewriting is hard; refactoring is easy and gets you 80% of the way toward the goal that pushes one to rewrite. It can also be done incrementally.

Comment by shminux on Why Study Physics? · 2021-11-28T10:09:02.605Z · LW · GW

I think the title should be "Why Study Physicists", not "Why Study Physics", because what you are describing is a gift certain physicists have and others do not. I had it in high school (often, when the teacher would state a problem in class, the only thing that was obvious from the beginning was the answer, not how to get to it), and it saved my bacon in grad school a few times, many years later. Recently it took my friend and me about 5 minutes of idle chatting to estimate the max feasible velocity of a centrifugal launch system, and where the bottlenecks might be (surprisingly, it is actually not the air resistance, and not the overheating, but the centrifugal g-forces before launch, which make single-stage-to-orbit impossible).

John Wheeler famously said something like "only do a calculation once you know the result". Einstein knew what he wanted out of General Relativity almost from the start, and it took him years and years to make the math work. This pattern applies in general as well: a physicist has a vague qualitative model that "feels right", and then finds the mathematical tools to make it work quantitatively, whether or not those tools are applied with any rigor. I don't know if this skill can be analyzed or taught; it seems more like artistic talent.
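For the curious, the core of that centrifugal-launch estimate is a single line of arithmetic, a = v^2/r; here is a toy version with made-up numbers (not the ones from our chat):

```python
# Back-of-envelope g-load at the rim of a centrifugal launcher: a = v^2 / r.

G = 9.81  # m/s^2, standard gravity

def rim_g_load(velocity_m_s: float, radius_m: float) -> float:
    return velocity_m_s ** 2 / radius_m / G

# Orbital-class exit velocity (~8 km/s) from a 50 m radius arm:
print(f"{rim_g_load(8000, 50):,.0f} g")  # ~130,000 g before launch
```

Any payload that has to survive that acceleration through the whole spin-up is the real design constraint, well before air resistance or heating enter the picture.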

Comment by shminux on You are way more fallible than you think · 2021-11-25T09:22:23.710Z · LW · GW

It's a good question. If you ever do, say, project estimates at work and look back at your track record, you will likely see the pattern: most of us would give 99% odds of completion for some project within a given time (well padded to make the odds that high), and still go over time and/or over budget way more often than 1% of the time. There are exceptions, but in general we suck at taking long tails into account.

Comment by shminux on You are way more fallible than you think · 2021-11-25T09:19:38.525Z · LW · GW

That's another way to look at it. The usual implicit assumptions break down on the margins. Though, given the odds of this happening (once in a bush, at best, and the flame was not all that glorious), I would bet on hallucinations as a much likelier explanation. Happens to people quite often.

Comment by shminux on You are way more fallible than you think · 2021-11-25T09:16:59.540Z · LW · GW

Uh... because belief feels like truth from the inside, and so you cannot trust the inside view unless you are extremely well calibrated on tiny probabilities? So all you are left with is the outside view. If that is what you are asking.

Comment by shminux on [linkpost] Why Going to the Doctor Sucks (WaitButWhy) · 2021-11-23T06:11:00.569Z · LW · GW

Uncharitable summary: 

The Lanby is building a primary-care utopia no one else thought of, even though it's obvious, with a focus on prevention, quality care, and free unicorns.

Comment by shminux on Morally underdefined situations can be deadly · 2021-11-22T15:52:33.780Z · LW · GW

Are you talking about bounded consequentialism, where you are hit with unknown unknowns? Or about known consequences whose moral status evaluates to "undefined"?

Comment by shminux on Worst Commonsense Concepts? · 2021-11-15T20:05:37.675Z · LW · GW

I'd go further than "fact vs opinion" and claim that the whole concept of there being one truth out there somewhere is quite harmful, given that the best we can do is have models that heavily rely on personal priors and ways to collect data and adjust said models.

Comment by shminux on Why do you believe AI alignment is possible? · 2021-11-15T18:11:41.375Z · LW · GW

I don't believe alignment is possible. Humans are not aligned with other humans, and the only thing that prevents an immediate apocalypse is the lack of recursive self-improvement on short timescales. Certainly, groups of humans happily destroy other groups of humans, and often destroy themselves in the process of maximizing something like the number of statues. The best we can hope for is that whatever takes over the planet after the meatbags are gone has some of the same goals that the more enlightened meatbags had, where "enlightened" is a very individual definition. Maybe it is a thriving and diverse Galactic civilization, maybe it is the word of God spread to the stars, maybe it is living quietly on this planet in harmony with nature. There is no single or even shared vision of the future that can be described as "aligned" by most humans.

Comment by shminux on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T06:14:23.588Z · LW · GW

I don't see any glaring flaws in any of the items on the inside view, and, obviously, I would not be qualified to evaluate them, anyway. However, when I try to take an outside view on this, something doesn't add up.

Specifically, it looks like anything that resembles a civilization should end up evolving, naturally or artificially, into an unsafe AGI most of the time, some version of Hanson's grabby aliens. We don't see anything like that, at least not in any detectable way. And so we hit the Fermi paradox, where an unremarkable backwater system is apparently the first one about to do so, many billions of years after the Big Bang. It is not outright impossible, but the odds do not match up with anything presented by Eliezer. Hanson's reason for why we don't see grabby aliens is a less than 1-in-10,000 "conversion rate" for the "non-grabby to grabby transition":

assuming a generous million year average duration for non-grabby civilizations, depressingly low transition chances p are needed to estimate that even one other one was ever active anywhere along our past lightcone (p ≲ 10^−3), has ever existed in our galaxy (p ≲ 10^−4), or is active now in our galaxy (p ≲ 10^−7). Such low chances p would bode badly for humanity's future

However, an unaligned AGI that ends humanity ought to have a much higher chance of transitioning into grabbiness than that, so there is a contradiction between the prediction of an unsafe AGI takeover and the lack of evidence of it happening in our past lightcone.

Comment by shminux on Improving on the Karma System · 2021-11-15T03:33:09.132Z · LW · GW

My gut feeling is that attracting more attention to a metric, no matter how good, will inevitably Goodhart it. The current karma system lives happily in the background, and people have not attempted to game it much since the days of Eugine_Nier. I am not sure what problem you are trying to solve, and whether your cure will not be worse than the disease.

Comment by shminux on AGI is at least as far away as Nuclear Fusion. · 2021-11-12T01:35:21.730Z · LW · GW

I am no MIRIan, but I see an obvious difference: nuclear fusion is easy and has been around for about 70 years; it's controlled nuclear fusion that is hard. By contrast, there is no needle to thread with AGI: once it's achieved, it is self-sustaining and, arguably, more like a nuclear bomb than a nuclear reactor. So that's an argument that AGI is not as hard.

However, there is an opposite argument: self-sustaining, long-lasting nuclear fusion has been around for 13.8 billion years and arises spontaneously in nature, while AGI has never been observed in nature, as far as we know, and "intelligence" in general, artificial or natural, has not been observed outside the surface of this planet.

Comment by shminux on Against the idea that physical limits are set in stone · 2021-11-11T22:26:09.144Z · LW · GW

the idea that physical limits are set in stone

is not what physicists believe. The actual statement is that, under a wide variety of conditions, the Core Theory is a very accurate description of the universe, and it precludes possibilities like FTL and time travel, among others. By the way, the FTL proposals you mentioned are not really FTL. For example, the Alcubierre drive, contrary to popular views, does not enable one to travel arbitrarily far faster than light, only as far as the light propagates before the drive is "engaged". An eternal Alcubierre drive is another story, but it's not something you can control; it just is. The same goes for Krasnikov tubes. All of these, as well as traversable wormholes, require negative energy sources and lead to various paradoxes. Additionally, there are theorems in general relativity stating that, under a variety of conditions, changing the topology of space is impossible without having singularities and/or closed timelike curves (the latter cannot be created, they are eternal).

Some hope of further breakthroughs is at the interface of general relativity and quantum field theory, since we know they do not play well together, even in the low-energy limit, hence the black hole information paradox. 

The hope that such a breakthrough might lead to effectively FTL travel is quite dashed by the lack of astrophysical observations that would hint at anything happening superluminally, even though the energies achieved in many observed natural phenomena are very much higher than anything we can hope to reach in lab experiments. The main astrophysical unknowns, dark energy and dark matter, are not in any way superluminal.

So anything like what you hope for (and we all hope for) would have to go beyond the Core Theory, and even further beyond the observed but as yet unexplained macroscopic phenomena, which is a really tall order.

Comment by shminux on Has LessWrong Been Mind-Killed on the Topic of God and Religion? · 2021-11-06T19:54:39.578Z · LW · GW

The other post is long and meandering, which only works if you are a good writer who expresses their point in koans or something. I couldn't even tell what your point was, or what your personal view on theism and religion is, or why you are motivated to discuss belief in the supernatural in a generally atheist crowd. Like, what are the windmills you are fighting?

Comment by shminux on Resurrecting all humans ever lived as a technical problem · 2021-10-31T21:32:58.744Z · LW · GW

Regarding collecting crumbs, Brandon Sanderson wrote a fantasy story about it: The Emperor's Soul.

Comment by shminux on Why the Problem of the Criterion Matters · 2021-10-31T05:32:04.889Z · LW · GW

I've always been confused about what people find confusing re the problem of the criterion. If I get it right, you can state it as "you must know how to assess truth in order to know how to assess truth", or something like that. My confusion about the confusion lies in squaring that with the idea of embedded agency, where we are part of a partially predictable universe and contain an imperfect model of that universe. Therefore something like a criterion for assessing truth is inherent in the setup; otherwise we would not count as agents.

Comment by shminux on True Stories of Algorithmic Improvement · 2021-10-30T03:55:22.919Z · LW · GW

Another source of speedup is making the right approximations. Ages ago I coded a numerical simulation of neuromuscular synaptic transmission, tracking 50k separate molecules bumping into each other, including release, diffusion, uptake, etc., that ended up modeling the full process faithfully (as compared with using a PDE solver) after removing irrelevant parts that took compute time but did not affect the outcome.
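To give the flavor of that kind of simulation (emphatically not the original code; a toy 1D random walk with made-up parameters):

```python
import random

# Toy particle-tracking model: each transmitter molecule takes a random
# step per tick and counts as taken up once it crosses the boundary.
# Dropping irrelevant detail (here, everything but the walk itself) is
# the kind of approximation that keeps a 50k-particle simulation tractable.

def simulate(n_particles: int = 5000, n_steps: int = 1000,
             step: float = 1.0, uptake_at: float = 30.0) -> float:
    taken_up = 0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += random.choice((-step, step))
            if x >= uptake_at:
                taken_up += 1
                break
    return taken_up / n_particles

print(f"fraction taken up: {simulate():.2%}")
```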

Comment by shminux on Self-Integrity and the Drowning Child · 2021-10-25T03:46:34.191Z · LW · GW

Oops... I guess I misunderstood what you meant by "two pieces of yourself".

Anyway, I really like the part 

you failed to understand and notice a kind of outside assault on your internal integrity, you did not notice how this parable was setting up two pieces of yourself at odds, so that you could not be both at once, and arranging for one of them to hammer down the other in a way that would leave it feeling small and injured and unable to speak in its own defense

because it attends to the feelings and not just to the logic: "hammer down the other in a way that would leave it feeling small and injured".

I could have designed an adversarial lecture that would have driven everybody in this room halfway crazy - except for Keltham

I... would love to see one of those, unless you consider it an infohazard/Shiri's scissor.