Posts

Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely 2020-10-22T05:30:18.648Z
Mask wearing: do the opposite of what the CDC/WHO has been saying? 2020-04-02T22:10:31.126Z
Good News: the Containment Measures are Working 2020-03-17T05:49:12.516Z
(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z
A basic probability question 2019-08-23T07:13:10.995Z
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z
Religion as Goodhart 2019-07-08T00:38:36.852Z
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z
To understand, study edge cases 2019-03-02T21:18:41.198Z
How to notice being mind-hacked 2019-02-02T23:13:48.812Z
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z
Wirehead your Chickens 2018-06-20T05:49:29.344Z
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z
[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z
[Link] AI advances: computers can be almost as funny as people 2013-08-02T18:41:08.410Z
How would not having free will feel to you? 2013-06-20T20:51:33.213Z

Comments

Comment by shminux on Pain is not the unit of Effort · 2020-11-25T00:12:00.429Z · LW · GW

"If sex is pain in the butt, you are doing it wrong!" was a semi-humorous reminder from one of my former coworkers in situations like you describe.

Comment by shminux on (Pseudo) Mathematical Realism Bad? · 2020-11-22T22:16:05.481Z · LW · GW

Have you read Anglophysics?

Comment by shminux on Inner Alignment in Salt-Starved Rats · 2020-11-20T07:29:37.135Z · LW · GW

Well, you are clearly an expert here. And indeed bridging from neurons to algorithms has been an open problem since forever. What I meant is, assuming you needed to code, say, an NPC in a game, you would code an "urge" a certain way, probably in just a few dozen lines of code. Plus the underlying language, plus the compiler, plus the OS, plus the hardware, which is basically gates upon gates upon gates, all alike. There is no reinforcement learning there at all, and yet rows of nearly identical gates become an algorithm. Maybe some parts of the brain work like that, as well?

Comment by shminux on Inner Alignment in Salt-Starved Rats · 2020-11-20T00:35:47.878Z · LW · GW

a comparatively elaborate world-modeling infrastructure is already in place, having been hardcoded by the genome

is an obvious model, given that most of the brain is NOT neocortex, but much more ancient structures. Somewhere inside there is an input to the nervous system, SALT_CONTENT_IN_BLOOD, which gets translated into a less graded and more binary "salt taste GOOD" or "salt taste BAD", and the "Need SALT" on/off urge. When the rat tastes the salt water from the tap for the first time, what gets recorded is not (just) "tastes good" or "tastes bad" but "tastes SALTY", which is post-processed into a behavior based on whether the salty taste is good or bad. Together with the urge to seek salt when low, and the memory of the salty taste from before, this would explain the rats' behavior pretty well.

You don't need a fancy neural net or reinforcement learning here; the logic seems quite basic.
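A minimal sketch of the kind of hardcoded logic I have in mind (the variable names, threshold and numbers are purely illustrative, not anything from the post):

```python
# Illustrative only: a hardcoded "salt urge" with no learning anywhere.
LOW_SALT_THRESHOLD = 0.3  # made-up normalized blood-salt level

def need_salt(salt_content_in_blood):
    # An innate, binary urge derived from a graded physiological signal.
    return salt_content_in_blood < LOW_SALT_THRESHOLD

# Memory of past taste experiences: what was tasted, not whether it was "good".
taste_memory = {}

def record_taste(source, taste):
    taste_memory[source] = taste

def seek(salt_content_in_blood):
    # Only when the urge is on does a remembered salty source become attractive.
    if need_salt(salt_content_in_blood):
        return [src for src, taste in taste_memory.items() if taste == "SALTY"]
    return []

record_taste("tap", "SALTY")             # first exposure: recorded as "tastes SALTY"
print(seek(salt_content_in_blood=0.9))   # sated: []
print(seek(salt_content_in_blood=0.1))   # salt-deprived: ['tap']
```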

Comment by shminux on Notes on Honor · 2020-11-17T23:30:11.486Z · LW · GW

To me honor/conscience is an evolutionary adaptation to cooperate in iterated prisoner's dilemmas, plus whatever side effects/lost purposes ended up piling up on top of that. Just like love is an outgrowth of an evolutionary adaptation that improved the odds of genetic material propagation through strategies like (serial) monogamy. Of course, we wax poetic about both, forgetting that they are but a fluke of evolution.

Comment by shminux on Sunday November 15th, 12:00PM (PT) — talks by Abram Demski, Daniel Kokotajlo and (maybe) more! · 2020-11-14T20:05:05.414Z · LW · GW

The link is to itself and garden.lesswrong.com shows "closed"... 

Comment by shminux on Multiple Worlds, One Universal Wave Function · 2020-11-05T07:34:39.553Z · LW · GW

Strongly downvoted for a basic misunderstanding of how science works (you test your theories, not wax poetic about them), alt-facts (the whole section on falsifiability is nonsense) and citing sus sources like hedweb. But MWI is an applause light on this forum, so whatever.

Comment by shminux on What is the right phrase for "theoretical evidence"? · 2020-11-02T06:46:22.524Z · LW · GW

What you are describing is models, not observations. If you confuse the two, you end up with silly statements like "MWI is obviously correct".

Comment by shminux on The Born Rule is Time-Symmetric · 2020-11-02T02:26:52.775Z · LW · GW

I find the title extremely misleading. Projecting always loses information about the full state, so in that sense there is no time symmetry, as the arrow of time is irreversible.

Comment by shminux on Why I Prefer the Copenhagen Interpretation(s) · 2020-10-31T23:54:46.532Z · LW · GW

Every calculation requires a projection postulate; there is no way around it.

Comment by shminux on On the Dangers of Time Travel · 2020-10-28T04:14:03.301Z · LW · GW

it is illegal to buy plutonium in a drugstore and phone booths have been quietly disappeared

Gold.

Comment by shminux on What is our true life expectancy? · 2020-10-24T01:03:42.891Z · LW · GW

As usual, it pays to extrapolate from past actuarial data, which should be available somewhere. My guess is that we are approaching saturation, barring extreme breakthroughs in longevity, which I personally find quite unlikely this century. I am also skeptical about AGI in 2075. If you had asked experts in 1970 about Moon bases, they would have expected one before 2000 with 90%+ confidence. And we already had the technology then. Instead, the real progress was in a completely unexpected area. I suspect that there will be some breakthroughs in the next 50 years, but they will come as a surprise.

Comment by shminux on No Causation without Reification · 2020-10-23T21:19:52.643Z · LW · GW

My point is that the entirety of what we think we mean when we say  is ontological, i.e. it's in the map. Not once ever has causality existed in the territory.

and 

This means that there's no aspect of the territory that is causality. There's no , there's no , there's no , there's just "is".

I'm glad I'm not the only one who finds this self-evident. The world indeed just is. You can think of it in timeless terms, or in evolving terms, but it doesn't change anything. You are a part of the world and so you just are, as well. There is no causality in you except for a physical process in your brain doing what feels like reification.

Comment by shminux on When was the term "AI alignment" coined? · 2020-10-22T01:15:10.814Z · LW · GW

Google advanced search sucks, but it's clear that AI friendliness and AI safety became AI alignment some time in 2016.

Comment by shminux on As a Washed Up Former Data Scientist and Machine Learning Researcher What Direction Should I Go In Now? · 2020-10-20T03:16:57.353Z · LW · GW

I am neither in ML nor in math nor in AI alignment, so just throwing it out there. From my reading of the issues facing alignment research, it looks like the very basics of formalizing embedded agency are still lacking, but easier to make progress on than anything directly related to alignment proper.

Comment by shminux on PredictIt: Presidential Market is Increasingly Wrong · 2020-10-19T01:03:41.615Z · LW · GW

Something is only a sure bet if there is a way to Dutch-book it. Not clear to me if any of the bets you suggest work like that.

Comment by shminux on A tale from Communist China · 2020-10-18T22:07:18.525Z · LW · GW

Interesting how it mirrors the experience of my great-grandparents after the Russian revolution through the 1920s and 1930s. It always starts with good intentions, but eventually deteriorates into the usual squabble for power and control, ideology superseded by envy and greed. I was also not told much about it until I was out of the situation where knowing too much would be dangerous. People never learn, of course. I wonder when and where this pattern will repeat (or has already repeated).

Comment by shminux on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-16T08:10:18.195Z · LW · GW

If you start with the model of an embedded agent in a partially internally predictable world (it has to be at least partially internally predictable, otherwise embedded agency would not make sense), the rest falls out of that. If you define an embedded agent as a subsystem that has a coarse model of the world and a set of goals to optimize the world for, as well as a way to interact with the outside world, then "evidence" is just that interaction with the outside world, processed and incorporated into the map, and sometimes into the goals. So, the assumption "evidence exists" is grounded in the idea of embedded agency.
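As a toy illustration of where "evidence" sits in that picture (a sketch only; all the names and numbers are made up):

```python
# A caricature of an embedded agent: a coarse map, some goals, and an interface
# to the outside world. "Evidence" is just the interaction, folded into the map.
class EmbeddedAgent:
    def __init__(self):
        self.world_model = {}               # the coarse map
        self.goals = {"stay_warm": True}    # what to optimize the world for

    def observe(self, world):
        # The interaction with the outside world.
        return world.get("temperature")

    def update(self, observation):
        # The observation gets processed and incorporated into the map.
        self.world_model["temperature"] = observation

    def act(self):
        if self.goals["stay_warm"] and self.world_model.get("temperature", 20) < 15:
            return "seek_heat"
        return "idle"

world = {"temperature": 10}
agent = EmbeddedAgent()
agent.update(agent.observe(world))
print(agent.act())  # 'seek_heat'
```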

If, on the other hand, you reject that approach in favor of another one, it pays to explicate your model of the world first. Is it solipsism? Cartesian dualism? Something else?

Comment by shminux on More Questions about Trees · 2020-10-10T00:21:02.269Z · LW · GW

From Scott Alexander: "Don't trust trees, they are seedy and shady"

Comment by shminux on Rationality and Climate Change · 2020-10-06T05:52:35.142Z · LW · GW

I suspect that climate change is both overhyped and underhyped. 

I expect that the current models underestimate the rate of change, and that the Arctic, the permafrost, Greenland and eventually the Antarctic will melt much sooner than projected, with the corresponding sea level rise. A lot of formerly livable places will stop being so, whether due to temperature extremes or ending up underwater.

That said, even the highest possible global warming will not exceed what happened 50 million years ago. And that time was actually one of the best for the diversity of life on Earth, and it could be again. What we have now is basically frozen leftovers of what once was. 

That said, the pace of the warming is unprecedented, and so a lot of wildlife will not be able to adapt and will go extinct, only for new varieties of species to take over their habitats.

That said, humans will suffer from various calamities and from forced migration north into livable areas. There will be population pressures that will result in the disappearance of the current Arctic states like Russia, Canada and Denmark's Greenland. And this will not happen without a fight, hopefully not a nuclear one, but who knows.

That said, there are plenty of potential technological ways to cool the planet down, and some may end up being implemented, whether unilaterally or consensually. This may happen as a short-term measure until other technologies are used to remove carbon dioxide from the atmosphere.

TL;DR: Climate change is a slow-moving disaster, but not an X-risk.

Comment by shminux on Are aircraft carriers super vulnerable in a modern war? · 2020-09-20T23:07:25.098Z · LW · GW

Not an expert, but they seem to be less useful in a conflict with a nuclear power that possesses something like this: https://en.wikipedia.org/wiki/Kh-47M2_Kinzhal

Comment by shminux on God in the Loop: How a Causal Loop Could Shape Existence · 2020-09-16T04:38:56.293Z · LW · GW
If you do travel back to the past though, you may find yourself travelling along a different timeline after that

No, that's not how it works. That's not how any of this works. If you are embedded in a CTC, there is no changing that. There is no escaping the Groundhog Day loop, or even realizing that you are stuck in one. You are not Bill Murray, you are an NPC.

And yes, our universe is definitely not a Gödel universe in any way. The Gödel universe is homogeneous and stationary, while our universe is of the FRW-de Sitter type, as best we can tell.

More generally, knowledge about the system, or memory, as well as the ability to act upon it to rearrange information. In fact, if an agent has perfect knowledge of a system, it can rearrange it in any way it desires.

Indeed, but it would not be an embedded agent, but something from outside the Universe, at which point you might as well say "God/Simulator/AGI did it" and give up.

if we assume our universe is a causal loop, but it is not a CTC

That is incompatible with classical GR, as best I can glean. The philosophy paper is behind a paywall (boo!), and it's by a philosopher, not a physicist, apparently, so it can be safely discounted (this attitude goes both ways, of course).

From that point on in your post, it looks like you are basically throwing **** against the wall and seeing what sticks, so I stopped trying to understand your logic.

Life doesn’t just veer off the rails into oblivion; it’s locked on a path, or lots of equivalent paths that are all destined to tell the same story — the same universal archetype. The loop cannot be broken, else it would have never existed. Life is bound to persist, bound to overcome, bound to exist again

To quote the classic movie, "Life, uh, finds a way". Which is a nice and warm sentiment, but nothing more.

But, if your goal is a search for God, then 10/10 for rationalization.

Comment by shminux on Egan's Theorem? · 2020-09-14T00:10:11.158Z · LW · GW
When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well - i.e. most of the macroscopic world.

Well, that's false. The details of the quantum-to-classical transition are very much an open problem. Something happens after the decoherence process removes the off-diagonal elements from the density matrix, and before only a single eigenvalue remains: the mysterious projection postulate. We have no idea at what scales it becomes important and in what way. The original goal was to explain new observations, definitely. But it was not "to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well".

Your other examples are more in line with what was going on, such as

for special and general relativity - they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work

That program worked out really well. But it is not a universal case by any means. Sometimes new models don't work in the old areas at all. Models of free will or consciousness do not reproduce physics, or vice versa.

The way I understand the "it all adds up to normality" maxim (not a law or a theorem by any means) is that new models do not make your old models obsolete where the old models worked well, nothing more.

I have trouble understanding what you would want from what you dubbed Egan's theorem. In one of the comment replies you suggested that the same set of observations could be modeled by two different models, and that there should be a morphism between the two models, either directly or through a third model that is more "accurate" or "powerful" in some sense than the other two. If I knew enough category theory, I would probably be able to express it in terms of some commuting diagrams, but alas. But maybe I misunderstand your intent.

Comment by shminux on Gems from the Wiki: Acausal Trade · 2020-09-13T19:43:43.780Z · LW · GW

I was trying to understand the point of this, and it looks like it is summed up in

Which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.

Isn't it your basic max EV that is at the core of all decision theories and game theories? The "acausal" part is using the intentional stance for modeling the parts of the universe that are not directly observable, right?

Comment by shminux on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-12T06:02:18.325Z · LW · GW
I think the most plausible explanation is that scientists don't read the papers they cite

Indeed. Reading an abstract and skimming intro/discussion is as far as it goes in most cases. Sometimes it's just the title that is enough to trigger a citation. Often it's "reciting", copying the references from someone else's paper on the topic. My guess is that maybe 5% of references in a given paper have actually been read by the authors.

Comment by shminux on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-12T00:47:31.412Z · LW · GW

Is there a more standard terminology in psychology for this phenomenon? "Ugh field" feels LW-cultish.

Comment by shminux on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-11T05:49:39.881Z · LW · GW
So unless you are willing to commit that not only there is no reliable way to assign a prior, but also assigning a probability in this situation is invalid in itself

Indeed. If you have no way to assign a prior, probability is meaningless. And if you try, you end up with something as ridiculous as the Doomsday argument.

Comment by shminux on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-10T00:52:23.326Z · LW · GW

Note that speaking of probabilities only makes sense if you start with a probability distribution over outcomes.

In the firing squad setup, the a priori probability distribution is something like 99% dead vs. 1% alive without a collusion to miss, and probably the opposite with a collusion to miss. So the Bayesian update gives you a high probability of collusion to miss. This matches the argument you presented here.
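With some made-up round numbers, the update looks like this:

```python
# Firing-squad Bayes update; all numbers are arbitrary round figures for illustration.
p_survive_given_no_collusion = 0.01   # ~99% dead without a collusion to miss
p_survive_given_collusion = 0.99      # roughly the opposite with a collusion to miss
prior_collusion = 0.5                 # an arbitrary prior on collusion

posterior_collusion = (p_survive_given_collusion * prior_collusion) / (
    p_survive_given_collusion * prior_collusion
    + p_survive_given_no_collusion * (1 - prior_collusion)
)
print(round(posterior_collusion, 2))  # 0.99: having survived, collusion is very likely
```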

In the fine-tuning argument we have no reliable way to create an a priori probability distribution. We don't know enough physics to even guess reliably. Maybe it's a uniform distribution over some "fundamental" constants. Maybe it's normal or log-normal. Maybe it's not even a distribution over the constants, but something completely different. Maybe it's Knightian. Maybe it's the intelligent designer/simulator. There is no hint from quantum mechanics, relativity, string theory, loop quantum gravity or any other source. There is only this one universe we observe, that's it. Thus we cannot use Bayesian updating to make any useful conclusions, whether about fine tuning or anything else. Whether this matches your argument, I am not sure.

Comment by shminux on The ethics of breeding to kill · 2020-09-07T21:24:07.471Z · LW · GW

If you talk to a real vegan, their ethical argument will likely be "do not create animals in order to kill and eat them later", period. Any discussion of the quality of life of the farm animal is rather secondary. This is your second argument, basically. The justification is not based on what the animals feel, or on their quality of life, but on what it means to be a moral human being, which is not a utilitarian approach at all. So, none of your utilitarian arguments are likely to have much effect on an ethical vegan. Note that rationalist utilitarian people here are not too far from that vegan, or at least that's my conclusion from the comments to my post Wirehead Your Chickens.

Comment by shminux on Why is Bayesianism important for rationality? · 2020-09-02T02:17:29.721Z · LW · GW

From https://en.wikipedia.org/wiki/Egregore#Contemporary_usage

a kind of group mind that is created when people consciously come together for a common purpose

Comment by shminux on Why is Bayesianism important for rationality? · 2020-09-01T06:54:31.663Z · LW · GW

(Not speaking for Eliezer, obviously.) "Carefully adjusting one's model of the world based on new observations" seems like the core idea behind Bayesianism in all its incarnations, and I'm not sure if there is much more to it than that. The stronger the evidence, the more significant the update, yada-yada. It seems important to rational thinking because we all tend to fall into the trap of either ignoring evidence we don't like or being overly gullible when something sounds impressive. Not that it helps a lot; way too many "rationalists" uncritically accept the local egregores and defend them like a religion. But allegiance to an ingroup is emotionally stronger than logic, so we sometimes confuse rationality with rationalization. Still, relative to many other ingroups this one is not bad, so maybe Bayesianism does its thing.

Comment by shminux on nostalgebraist: Recursive Goodhart's Law · 2020-08-27T06:28:51.844Z · LW · GW

In this situation Goodhart is basically open-loop optimization. An EE analogy would be a high-gain op-amp with no feedback circuit. The result is predictable: you end up optimized out of the linear mode and into saturation.

You can't explicitly optimize for something you don't know. And you don't know what you really want. You might think you do, but, as usual, beware what you wish for. I don't know if an AI can form a reasonable terminal goal to optimize, but humans surely cannot. Given that some 90% of our brain/mind is not available to introspection, all we have to go by is the vague feeling of "this feels right" or "this is fishy but I cannot put my finger on why". That's why cautiously iterating with periodic feedback is so essential, and why open-loop optimization is bound to get you to all the wrong places.
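A toy numerical sketch of the difference (the functions and step sizes are made up, just to show open-loop vs. feedback):

```python
# Open-loop optimization of a proxy vs. iterating with periodic feedback.
def true_value(x):
    return -(x - 3) ** 2          # what we actually want, peaking at x = 3

def proxy(x):
    return x                      # the measurable proxy keeps rewarding "more"

x_open = 0.0
for _ in range(100):              # open-loop: climb the proxy, never check the goal
    if proxy(x_open + 0.5) > proxy(x_open):
        x_open += 0.5
print(x_open, true_value(x_open))       # 50.0 -2209.0: deep into "saturation"

x_fb = 0.0
while true_value(x_fb + 0.5) > true_value(x_fb):   # stop when feedback says "worse"
    x_fb += 0.5
print(x_fb, true_value(x_fb))           # 3.0 -0.0: close to what we actually wanted
```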

Comment by shminux on On Suddenly Not Being Able to Work · 2020-08-26T01:20:16.377Z · LW · GW

It looks like you've got an anxiety flareup every time you try to work. Anxiety does not necessarily present as a fast heartbeat, hyperventilation or any other easily measurable symptom. I have seen it aplenty in myself and others. Often the issue is not enough slack, but the way you describe it, you seem to have plenty, just maybe not of the mental kind.

One approach that I have seen help is to do "15 min work". Not a pomodoro, though! Those imply lots of structured work and short breaks. Just... "I will write this code for 15 min" or "I will edit this post for 15 min", no further obligations, no pressure. If you stop after 15 min, it's still an accomplishment; if you decide to keep going for a while, that's fine, too. But stop when you get the same feeling again, and do something more fun. Once the internal pressure goes away, think about when you can do another "15 min, no obligations past that" bit of work.

Comment by shminux on Why don't countries, like companies, more often merge? · 2020-08-23T02:36:44.589Z · LW · GW

Tanzania

Comment by shminux on GPT-3, belief, and consistency · 2020-08-17T02:32:49.305Z · LW · GW
It's not quite the same, because if you're confused and you notice you're confused, you can ask.

You can if you do, but most people never notice, and those who notice some confusion are still blissfully ignorant of the rest of their self-contradicting beliefs. And by most people I mean you, me and everyone else. In fact, if someone pointed out a contradictory belief in something we hold dear, we would vehemently deny the contradiction and rationalize it to no end. And yet we consider ourselves to believe things. If anything, GPT-3's beliefs are more belief-like than those of humans.

Comment by shminux on Many-worlds versus discrete knowledge · 2020-08-14T04:16:46.636Z · LW · GW
For this reason, I significantly prefer the Bohm interpretation over the many-worlds interpretation

Preferences do not make science. Philosophy, for sure.

Odds are, once mesoscopic quantum effects become accessible to experiment, we will find that none of the interpretational models reflect the observations well. I would put 10:1 odds that the energy difference of entangled states cannot exceed about one Planck mass, a few micrograms. Whether there is a collapse of some sort, hidden variables, superdeterminism, who knows.

Anyway, in general I find this approach peculiar: picking a model based on emotional reasoning like "I like indexicality" or "String theory is pretty". It can certainly serve as a guide to which promising research areas to put one's efforts into, but it's not a matter of preference; the observations will be the real arbiter.

Comment by shminux on Fantasy-Forbidding Expert Opinion · 2020-08-10T15:22:37.053Z · LW · GW

Right, that makes sense. One reference class is "does not exist except in a fantasy" and the other "do not try it on yourself until there is reliable published research".

Comment by shminux on Fantasy-Forbidding Expert Opinion · 2020-08-10T07:20:29.042Z · LW · GW

Hmm, fairies and trolls are not at all like a "vitamin X". There are plenty of supplements that are known to have real positive effects in many cases. And we still know so little about the human body and mind that there could still be plenty of low-hanging fruit waiting to be plucked. As for fairies and trolls, we know that these are artifacts of the human tendency to anthropomorphize everything, and there is not a single member of the reference class "not human but human-like in appearance and intelligence". We also understand enough of evolution to exclude, with high confidence, species like that. (Including humanoid aliens, whether in appearance or in a way of thinking.) But we cannot convincingly state that some extract of an exotic plant or animal from the depths of the rainforest or the ocean would not prove to have, say, a health benefit for humans. The odds are not good, but immeasurably better than those of finding another intelligence, on this planet or elsewhere.

Comment by shminux on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-10T00:16:19.222Z · LW · GW

Ah, okay. I don't see any reason to be concerned about something that we have no effect on. Will try to explain below.

Regarding "subjunctive dependency" from the post linked in your other reply:

I agree with a version of "They are questions about what type of source code you should be running", formulated as "what type of an algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have the option of running some other algorithm (you don't; you are your own algorithm).

The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity" and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively".

Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves. If you also postulate that the predictor is an algorithm, then the question of decision theory in the presence of predictors becomes something like "what type of agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach the subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent's universe.

Clearly your model is different from the above, since you seriously think about untestables and unaffectables.
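A toy version of that framing (everything here is made up purely for illustration): a fixed predictor algorithm that predicts by running the agent's own algorithm, and two candidate agent algorithms compared by the payoff they end up with.

```python
# Newcomb-style toy: the predictor is a fixed algorithm in which the agent is immersed.
def predictor(agent):
    return agent("hypothetical")        # predicts by running the agent's own code

def one_boxer(_context):
    return "one-box"

def two_boxer(_context):
    return "two-box"

def payoff(agent):
    opaque_box = 1_000_000 if predictor(agent) == "one-box" else 0
    choice = agent("for real")
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(payoff(one_boxer))   # 1000000
print(payoff(two_boxer))   # 1000
```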

Comment by shminux on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-09T19:03:30.377Z · LW · GW

I still don't understand what you mean by "causally-disconnected" here. In physics, what you can causally affect is anything in your future light cone (under some mild technical assumptions). In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be caring now about something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you are in effect interested in the causally connected parts.

Comment by shminux on What is filling the hole left by religion? · 2020-08-08T05:39:35.142Z · LW · GW
They start asking why so how do I explain?

Then it's no longer a pure one-shot, I agree. But there are plenty of cases where defection can be done with impunity, and personal conscience is all that is keeping one from it. It doesn't have to be financial, either.

Comment by shminux on What is filling the hole left by religion? · 2020-08-05T02:02:20.728Z · LW · GW
Just when are we really in a one-shot PD setting?

Any situation where the defection would not be uncovered can be treated as a one-shot PD. Theft, slacking off, you name it.

Comment by shminux on What is filling the hole left by religion? · 2020-08-04T15:56:51.538Z · LW · GW

Morality is what prevents social animals from defecting in one-shot PDs. Society is what prevents social animals from defecting in iterated PDs. So morality is an evolutionary adaptation. In higher animals like primates and whales, where culture is essential, morality is largely learned (including by reading books). In lower animals, like dogs and bees, it is largely innate. The cultural behavioral norms are malleable and depend on the society; thus theft/violence/murder can be acceptable in some cases and prohibited in others, and there are no absolutes.

Everything else, like religion, is just elaborate fluff on top.

Comment by shminux on [deleted post] 2020-08-01T01:06:26.545Z

People argue about Newcomb's paradox because they implicitly bury the freedom of choice of both the agent and the predictor in various unstated assumptions. The counterfactual approach is a prime example of this. For a free-will-free treatment of Newcomb's and other decision theory puzzles, see my old post.

Comment by shminux on Would a halfway copied brain emulation be at risk of having different values/identity? · 2020-07-30T06:05:10.633Z · LW · GW

I'd think that the brain is more like a hologram. Copying a small part would result in a dimmer and less resolved, but still complete, image. That said, I also don't see an ethical issue in copying an inactive brain "trait-by-trait".

Comment by shminux on New Paper on Herd Immunity Thresholds · 2020-07-30T05:09:52.057Z · LW · GW

I thought that the data show that the immunity lasts maybe 2-3 months? If so, we will never get to 10%.

Comment by shminux on What Failure Looks Like: Distilling the Discussion · 2020-07-30T05:03:15.117Z · LW · GW

Like others, I find "failure by enemy action" more akin to a malicious human actor injecting some extra misalignment into an AI whose inner workings are already poorly understood. Which is a failure mode worth worrying about, as well.

Comment by shminux on What a 20-year-lead in military tech might look like · 2020-07-29T21:14:52.715Z · LW · GW

I agree that we already have the makings of removing humans from the battlefield. Fighters, bombers, drones, all can be controlled remotely (especially once the Starlink-like constellations are up) or semi-autonomously. General Dynamics-like robots can certainly be deployed as a substitute for boots on the ground, not too far down the road. And, given the unexpected advances in AI through GPT text/image generators, this tech becomes uncomfortably close to Terminator-like scenarios.

Comment by shminux on Billionaire Economics · 2020-07-28T04:20:42.913Z · LW · GW

You can't do anything meaningful on a country-wide scale for $20bln. The US social security budget is over a trillion dollars, the highest budget expense, and it is not nearly enough. Your "progressive" (read: ideologically left) friends are posting it for the feel-good ingroup applause lights without ever checking the sources. Not much different from "covid is a Chinese/Liberal conspiracy".

Comment by shminux on A Natural Explanation of Nash Equilibria · 2020-07-26T21:53:20.580Z · LW · GW

Just a general comment that humans and most other social animals have evolved internal mechanisms for correcting the PD-type payoff matrix even in the absence of an external threat of punishment for defection. We often call it "honor" or "conscience". It is not foolproof, but it tends to function well enough to prevent ingroup collapse.
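A minimal numerical sketch of that payoff-matrix correction (the payoffs and the size of the "conscience" cost are arbitrary):

```python
# One-shot PD: an internal "conscience" cost on defection changes the equilibrium.
import itertools

BASE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1)}   # standard-looking PD payoffs

def payoffs(a, b, conscience_cost):
    pa, pb = BASE[(a, b)]
    return (pa - (conscience_cost if a == "D" else 0),
            pb - (conscience_cost if b == "D" else 0))

def nash_equilibria(conscience_cost):
    eqs = []
    for a, b in itertools.product("CD", repeat=2):
        pa, pb = payoffs(a, b, conscience_cost)
        best_a = all(pa >= payoffs(alt, b, conscience_cost)[0] for alt in "CD")
        best_b = all(pb >= payoffs(a, alt, conscience_cost)[1] for alt in "CD")
        if best_a and best_b:
            eqs.append((a, b))
    return eqs

print(nash_equilibria(0.0))   # [('D', 'D')]: without conscience, defection wins
print(nash_equilibria(3.0))   # [('C', 'C')]: a big enough internal cost flips it
```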