Posts

(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z · score: 25 (9 votes)
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z · score: 36 (13 votes)
A basic probability question 2019-08-23T07:13:10.995Z · score: 11 (2 votes)
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z · score: 31 (13 votes)
Religion as Goodhart 2019-07-08T00:38:36.852Z · score: 21 (8 votes)
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z · score: 6 (9 votes)
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z · score: 18 (6 votes)
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z · score: 23 (12 votes)
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z · score: 50 (23 votes)
To understand, study edge cases 2019-03-02T21:18:41.198Z · score: 27 (11 votes)
How to notice being mind-hacked 2019-02-02T23:13:48.812Z · score: 16 (8 votes)
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z · score: 5 (7 votes)
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z · score: 11 (3 votes)
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z · score: 12 (3 votes)
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z · score: 22 (8 votes)
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z · score: 30 (17 votes)
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z · score: 24 (12 votes)
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z · score: 47 (20 votes)
Wirehead your Chickens 2018-06-20T05:49:29.344Z · score: 72 (44 votes)
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z · score: 16 (5 votes)
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z · score: 28 (14 votes)
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z · score: 8 (9 votes)
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z · score: 16 (21 votes)
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z · score: 14 (18 votes)
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z · score: 21 (22 votes)
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z · score: 7 (8 votes)
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z · score: 10 (15 votes)
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z · score: -31 (42 votes)
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z · score: 19 (19 votes)
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z · score: 16 (17 votes)
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z · score: 6 (11 votes)
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z · score: 2 (4 votes)
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z · score: 37 (37 votes)
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z · score: 10 (12 votes)
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z · score: 20 (20 votes)
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z · score: 16 (16 votes)
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z · score: 39 (41 votes)
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z · score: 17 (17 votes)
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z · score: 9 (11 votes)
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z · score: 5 (9 votes)
[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z · score: 8 (8 votes)
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z · score: 11 (13 votes)
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z · score: 2 (12 votes)
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z · score: 7 (17 votes)
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z · score: 25 (31 votes)
[Link] AI advances: computers can be almost as funny as people 2013-08-02T18:41:08.410Z · score: 7 (9 votes)
How would not having free will feel to you? 2013-06-20T20:51:33.213Z · score: 6 (14 votes)
Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" 2013-06-17T05:11:29.160Z · score: 18 (22 votes)
Applied art of rationality: Richard Feynman steelmanning his mother's concerns 2013-06-04T17:31:24.675Z · score: 8 (17 votes)
[LINK] SMBC on human and alien values 2013-05-29T15:14:45.362Z · score: 3 (10 votes)

Comments

Comment by shminux on The recent NeurIPS call for papers requires authors to include a statement about the potential broader impact of their work · 2020-02-24T09:07:11.579Z · score: 3 (5 votes) · LW · GW

Just like with renaming NIPS to NeurIPS, this is wokeness gone wild.

Comment by shminux on How do you survive in the humanities? · 2020-02-22T03:15:53.901Z · score: 3 (2 votes) · LW · GW

As Dagon said, learning empathy and humility is always a good idea. You don't have to believe your teacher or condone their views or practices, but that's a different issue.

Comment by shminux on How do you survive in the humanities? · 2020-02-21T04:03:14.399Z · score: -2 (11 votes) · LW · GW

Notice that your teachers are actually rational, if you define rationality as success in life. Believing, or at least professing to believe, something you disagree with did not hinder their ability to get the jobs they want and teach the classes they want. They might not do as well in a science or engineering department, but that is not where they work.

You are stuck considering two options while missing many others. You think that they are wrong AND irresponsible AND harmful, period, and you can either try to fix it or to ignore it. Ironically, that is where your own failure lies: you can't even consider that their views may actually work for them, and for other students. Art is not science, life is not logic, and rationality is not a pursuit of the one truth.

Should I just shut up and focus on graduating? Or would it be unethical of me to just stand by while hundreds are taught to shut off their reasoning skills?

Consider learning empathy (understanding where others come from, why and how). Consider learning humility (accepting that your view might not be the only one worth holding). Consider learning other approaches to life, not necessarily just those based on pure logic. If you manage, you might be surprised by your own personal growth as a human.

Comment by shminux on Stuck Exploration · 2020-02-21T02:23:16.268Z · score: 2 (1 votes) · LW · GW

Was trying to explain, but it looks like I screwed something up in the reformulation :)

Comment by shminux on Stuck Exploration · 2020-02-20T21:28:09.457Z · score: 2 (1 votes) · LW · GW
But coin needs to depend on your prediction instead of being always biased a particular way.

I don't see why; where would the isomorphism break?

Comment by shminux on Stuck Exploration · 2020-02-20T04:39:45.493Z · score: 0 (2 votes) · LW · GW

I am confused about the iterated Parfit's hitchhiker setup. In the one-shot PH the agent not only never gets to safety, they die in the desert. So you must have something else in mind. Presumably they can survive the desert in some way (how?) at the cost of lower utility. Realizing that precommitting to not paying results in suboptimal outcomes, the agent would, well, "explore" other options, including precommitting to paying.

If my understanding is correct, then this game is isomorphic to betting on a coin toss:

You are told that betting on heads pays $1000 if the coin indeed lands heads, and betting on tails pays $1 if it indeed lands tails. What you do not know is that the coin is 100% biased and always lands tails.

In that less esoteric setup you will initially bet on heads, but after a few losses, realize that the coin is biased and adjust your bet accordingly.
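A minimal sketch of that adjustment process (the three-losses switching threshold is an arbitrary stand-in for a proper Bayesian update):

```python
def simulate(trials=20):
    """Toy agent betting on a coin it believes may be fair.

    The coin is secretly 100% biased towards tails. The agent starts by
    betting heads (the higher payoff under a fair-coin prior) and switches
    to tails once enough evidence piles up that heads is hopeless.
    """
    tails_seen = 0
    winnings = 0
    for _ in range(trials):
        bet = "heads" if tails_seen < 3 else "tails"  # crude update rule
        outcome = "tails"                             # the rigged coin
        if bet == outcome:
            winnings += 1000 if outcome == "heads" else 1
        tails_seen += outcome == "tails"
    return winnings

print(simulate())  # 17: three exploratory losses, then $1 per round
```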

Comment by shminux on [deleted post] 2020-02-17T22:53:55.139Z

It sounds plausible, though it needs a lot more evidence than that single source.

Of course, even if proven, that would whip the LGBTQ+ community into a rage, because "you are trying to kill our identity with hormonal meds!!!!" The issue of insurance would come up, too: why would a transition be covered if a much cheaper option of taking meds that "treat" the underlying gender dysphoria is available? There is also a bit of a misleading statement, "once the underlying dysfunction is corrected", if "corrected" means "taking a life-long medication that changes your natural hormonal balance and causes unknown and potentially dangerous side effects".

Comment by shminux on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-15T23:31:02.385Z · score: 6 (3 votes) · LW · GW

Universities are the Easter Island statues.

Comment by shminux on The Reasonable Effectiveness of Mathematics or: AI vs sandwiches · 2020-02-15T05:38:51.117Z · score: 2 (1 votes) · LW · GW
making sandwiches is a task relatively similar to tasks we had to deal with in the ancestral environment, and in particular there is not a lot of serial depth to the know-how of making sandwiches. If we pluck a person from virtually any culture in any period of history, then it won't be difficult to explain em how to make a sandwich. On the other hand, in the case of AI risk, just understanding the question requires a lot of background knowledge that was built over generations and requires years of study to properly grasp.

If I understand your argument correctly, it implies that dealing with agents that evolve from simpler than you are to smarter than you are within a few lifetimes ("foom") is not a task that was ever present, or at least not successfully accomplished by your evolutionary ancestors, and hence not incorporated into the intuitive part of the brain. Unlike, say, the task of throwing a rock with the aim of hitting something, which has been internalized and eventually resulted in the NBA, with all the required nonlinear differential equations solved by the brain in real time accurately enough, for some of us more so than for others.

Similarly, approximate basic counting is something humans and other animals have done for millions of years, while, say, accurate long division was never evolutionarily important and so requires engaging the conscious parts of the brain just to understand the question ("why do we need all these extra digits and what do they mean?"), even though it is technically much much simpler than calculating the way one's hand must move in order to throw a ball on just the right trajectory.

If this is your argument, then I agree with it (and made a similar one here before numerous times).

Comment by shminux on Demons in Imperfect Search · 2020-02-12T02:14:37.352Z · score: 4 (2 votes) · LW · GW

Being stuck in local minima or in a long shallow valley happens in optimization problems all the time. Isn't this what simulated annealing and similar techniques are designed to correct? I've seen this a lot in maximum likelihood Markov chain discovery problems.
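For reference, a bare-bones sketch of the technique (illustrative double-well function and cooling schedule, nothing tuned):

```python
import math
import random

def anneal(f, x0, steps=10_000, t0=2.0, seed=0):
    """Bare-bones simulated annealing: occasionally accept uphill moves,
    with the acceptance probability shrinking as the temperature cools,
    so the search can climb out of local minima early on."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-9          # linear cooling
        cand = x + rng.gauss(0, 0.5)                # random local move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
    return x, fx

# Double well with a shallow local minimum near x = 1.4 and a deeper
# global one near x = -1.4; greedy descent started at 1.4 stays stuck.
print(anneal(lambda x: x**4 - 4 * x**2 + x, x0=1.4))
```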

Comment by shminux on Why do we refuse to take action claiming our impact would be too small? · 2020-02-11T01:43:51.828Z · score: 8 (5 votes) · LW · GW
Why do we refuse to take action claiming our impact would be too small?

We don't. Manifestly so. We (or those of us with enough skill to do so) engage others to leverage the impact, increasing it manyfold. Examples abound, most recently Greta Thunberg, but in general every time people organize for a cause. Of course, not everyone can do it; people are all different, and it often takes a passionate leader, a champion of a specific cause, to get the ball rolling. But it happens all the time, just check the news.

Comment by shminux on Relationship Outcomes Are Not Particularly Sensitive to Small Variations in Verbal Ability · 2020-02-09T01:40:45.002Z · score: 27 (12 votes) · LW · GW
After a friendship-ending fight

If you find yourself in a fight with a friend, you must have missed about a dozen alarm bells and opportunities to make better choices by that point... So a postmortem would not be about the last-minute verbiage, but about what got you to that point in the first place, in order to avoid repeating the same suboptimal decisions.

Comment by shminux on A Cautionary Note on Unlocking the Emotional Brain · 2020-02-08T22:43:27.822Z · score: 9 (4 votes) · LW · GW

That is really interesting! Sorry to hear that this promising technique backfired. Do you mind sharing any specific examples of what the clash was, what you did, and what "false belief" got stronger, twisting the previously "good" belief?

Comment by shminux on Category Theory Without The Baggage · 2020-02-08T07:58:25.315Z · score: 4 (2 votes) · LW · GW

Trying to write it up better; we'll see if it works.

Comment by shminux on Plausibly, almost every powerful algorithm would be manipulative · 2020-02-07T02:12:02.051Z · score: 2 (1 votes) · LW · GW

How do you define manipulation?

Comment by shminux on Philosophical self-ratification · 2020-02-05T05:35:55.091Z · score: 4 (2 votes) · LW · GW

Ah, makes sense. So self-ratification is about seeing oneself as trustworthy. Which is neither a necessary condition for a theory to be useful, nor a sufficient condition for it to be trustworthy from outside. But still a useful criterion when evaluating a theory.

Comment by shminux on Meta-Preference Utilitarianism · 2020-02-05T03:27:29.113Z · score: 7 (4 votes) · LW · GW

Humans are not utilitarians, so any kind of utilitarianism is a proxy for what one really wants to achieve, and pushing it far enough means the tails coming apart and such. You are adding "voting" as a way to close the feedback loop from the unknown values to the metric, but that seems like a sledgehammer: you can just take the vote on each particular example, no need to perform a complicated mixed utilitarian calculation.

Comment by shminux on Philosophical self-ratification · 2020-02-05T03:13:27.792Z · score: 2 (1 votes) · LW · GW

I'm confused about the difference between self-ratification and self-consistency. For example, CDT (as usually described) fights the hypothetical ("there are perfect predictors") in Newcomb's, by assigning non-zero probability to successful two-boxing, which has zero probability in the setup. Since CDT is blind to its own shortcoming (is it? I assume it is; not sure if there is a formal proof of it, or what it would even mean to write out such a statement somewhat formally), does it mean it's not self-ratifying? Inconsistent? As I said, confused...

Comment by shminux on Category Theory Without The Baggage · 2020-02-04T09:49:27.327Z · score: 9 (5 votes) · LW · GW

It seems like this post was appreciated mostly by those who already understand enough of abstract math to make sense of it, less so by the uninitiated. I've only ever had an intuitive understanding of CT, and my relevant math level is probably just around a first course in algebraic topology, which is still somewhat higher than the LW average, yet I struggled to keep my interest reading through your post.

For comparison, I wrote a very informal comment on how an embedded agent's modeling abstraction levels can map into CT concepts. Which didn't get much traction. But maybe starting with examples already familiar and relevant to the audience could be something to try when introducing CT to the LW masses.

Comment by shminux on Money isn't real. When you donate money to a charity, how does it actually help? · 2020-02-02T20:42:32.493Z · score: 14 (10 votes) · LW · GW

To paraphrase Sean Carroll, money is as real as baseball.

Does baseball exist? It's nowhere to be found in the Standard Model of particle physics. But any definition of "exist" that can't find room for baseball seems overly narrow to me. It's true that we could take any particular example of a baseball game and choose to describe it by listing the exact quantum state of each elementary particle contained in the players and the bat and ball and the field etc. But why in the world would anyone think that is a good idea? The concept of baseball is emergent rather than fundamental, but it's no less real for all of that.

In modern human society, money is no less real than guns or food. This may change if society collapses or changes form, but as things stand, there is nothing illusory about money. When you donate money to a charity, it gains the power to get food, or mosquito nets, or workers on the ground, or anything else that is hard to provide directly.

Comment by shminux on "Memento Mori", Said The Confessor · 2020-02-02T05:15:31.153Z · score: 4 (2 votes) · LW · GW

How do you practice dying and why is it worthwhile?

Comment by shminux on Have epistemic conditions always been this bad? · 2020-01-26T09:12:58.126Z · score: 12 (4 votes) · LW · GW

To answer that, one needs to see the endgame of the current upswell. Looking back at those that came before this one might be useful. Also, at least in the US, there is a lot of polarization going on, and the forces you have described are not the only ones around. It is not clear that the woke cancel culture will prevail, but it seems quite likely, given that it is the newest force.

Comment by shminux on Coordination as a Scarce Resource · 2020-01-26T04:57:40.858Z · score: 4 (2 votes) · LW · GW

That's a great takeaway! Coordination is indeed an almost universal bottleneck.

Comment by shminux on Constraints & Slackness as a Worldview Generator · 2020-01-26T04:26:34.019Z · score: 7 (4 votes) · LW · GW

Looks like your main point is that an invention that removes a bottleneck ("relaxes a taut constraint") is the one likely to succeed, is that right? Would it mean that in your China example someone inventing, say, banking or lending would make a killing? If so, is that what happened?

Comment by shminux on Have epistemic conditions always been this bad? · 2020-01-26T01:02:52.500Z · score: 34 (13 votes) · LW · GW

It feels like things are indeed getting worse, and maybe they are, but I suspect that we are witnessing one of the classic patterns, where a grassroots movement that starts by speaking truth to power and punching up eventually gains enough momentum to become the power, and gradually shifts to punching down, while still believing that it is an underdog fighting the oppression. Eventually it becomes a part of the entrenched power structure. Christianity, Lutheranism, Communism, etc. are classic examples of that.

During this transition from disenfranchised to establishment, while the movement still uses the old radical tactics, things generally get worse for basically everyone, because no one is safe from its vicious attacks and these attacks pack all the power of the established structures. Eventually, though not always, the new establishment gets more secure about its position and mellows in its methods. The weaponized marginalization fades away, only to be employed by the next truly marginalized group, and the cycle begins again.

Comment by shminux on Terms & literature for purposely lossy communication · 2020-01-23T08:28:05.559Z · score: 2 (1 votes) · LW · GW

It may be related to statistical mechanics, with the concepts of microstates, macrostates and entropy. In your first example there are 2 microstates per macrostate, so the entropy of the system, as far as your friend is concerned, is log2(2) = 1 bit. In your second example there are, say, 2^20 pixels of 2^5 bits each, and if there are, say, 2^13 different possible distinct pictures that can still be reasonably called "Lion on grass" (a macrostate), then the entropy of "Lion on grass" is log2(2^13) = 13 bits.
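In code form, this is just the S = log2(W) bookkeeping from above:

```python
import math

def macrostate_entropy_bits(num_microstates: int) -> float:
    """Entropy of a macrostate that lumps together this many equally
    likely microstates: S = log2(W)."""
    return math.log2(num_microstates)

print(macrostate_entropy_bits(2))      # 1.0  -- the first example
print(macrostate_entropy_bits(2**13))  # 13.0 -- "Lion on grass"
```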

Comment by shminux on How Doomed are Large Organizations? · 2020-01-22T05:54:03.861Z · score: 23 (8 votes) · LW · GW

Examples! Which large stable organizations have managed to resist this trend and how? Motorola? IBM? Toyota? Without examples it's just unconvincing words.

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-19T08:36:33.957Z · score: 2 (1 votes) · LW · GW

Actually, superdeterminism models allow for both to be true. It is a different assumption that breaks.

Comment by shminux on "How quickly can you get this done?" (estimating workload) · 2020-01-18T02:02:08.395Z · score: 3 (2 votes) · LW · GW

The standard process is scope -> effort -> schedule. Estimate the scope of the feature or fix required (usually by defining requirements, writing test cases, listing impacted components, etc.), correct for underestimating based on past experience, and evaluate the effort required, again based on similar past efforts by the same team/person. Then and only then can you figure out the duty cycle for this project, and estimate accordingly. Then double it, because even the best people suck at estimating. Then give the range as your answer if someone presses you on it: "This will be between 2 and 4 weeks, given these assumptions. I will provide updated estimates 1 week into the project."
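The arithmetic is trivial, but here it is spelled out anyway (the correction factor and duty cycle below are made-up numbers; substitute your own team's history):

```python
def estimate_range(raw_effort_days: float,
                   bias_correction: float = 1.5,
                   duty_cycle: float = 0.6) -> tuple[float, float]:
    """Scope -> effort -> schedule, spelled out: inflate the raw estimate
    for known underestimation bias, spread the effort over the fraction
    of time actually available for this project, then double the result
    for the upper bound."""
    effort = raw_effort_days * bias_correction
    schedule = effort / duty_cycle
    return schedule, 2 * schedule

low, high = estimate_range(5.0)
print(f"between {low:.0f} and {high:.0f} working days")
```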

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-17T07:56:09.767Z · score: 2 (1 votes) · LW · GW

Not surprisingly, I have a few issues with your chain of reasoning.

1. I exist. (Cogito, ergo sum). I'm a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.

Cogito is an observation. I am not arguing with that one. Ergo sum is an assumption, a model. The "multiverse" thing is a speculation.

Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We're all just quarks and leptons interacting.

This is very much simplified. Sure, we can do reduction, but that doesn't mean we can do synthesis. There is no guarantee that it is even possible to do synthesis. In fact, there are mathematical examples where synthesis might not be possible, simply because the relevant equations cannot be solved uniquely; I made a related point here. For example, consciousness can potentially be reduced to atoms, but it may also be reduced to bits, a rather different substrate. Maybe there are other reductions possible.

And it is also possible that constructing consciousness out of quarks and leptons is impossible because of "hard emergence" of the sorites kind. There is no atom of water. A handful of H2O molecules cannot be described as a solid, liquid or gas. A snowflake requires trillions of trillions of H2O molecules together. There is no "snowflakiness" in a single molecule, just like there is no consciousness in an elementary particle. There is no evidence for panpsychism, and plenty against it.

Comment by shminux on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T06:03:54.327Z · score: 4 (2 votes) · LW · GW
“Getting out of bed in the morning” and “caring about one’s friends” turn out to be useful for more reasons than Jehovah—but their derivation in the mind of that person was entangled with Jehovah.

Cf: "Learning rationality" and "Hanging out with like-minded people" turn out to be useful for more reasons than AI risk -- but their derivation in the mind of CFAR staff is entangled with AI risk.

Comment by shminux on Predictors exist: CDT going bonkers... forever · 2020-01-15T16:25:03.429Z · score: 2 (1 votes) · LW · GW

That... doesn't seem like a self-consistent decision theory at all. I wonder if any CDT proponents agree with your characterization.

Comment by shminux on Is backwards causation necessarily absurd? · 2020-01-15T05:24:55.373Z · score: 10 (6 votes) · LW · GW
causation might be in the map rather than the territory

Of course it is. There is no atom of causation anywhere. It's a tool for embedded agents to construct useful models in an internally partially predictable universe.

"Backward causation" may or may not be a useful model at times, but it is certainly nothing but a model.

As a trained (though not practicing) physicist, I can see that you are making a large category error here. Relativity neither adds to nor subtracts from the causation models. In a deterministic Newtonian universe you can imagine backward causation as a useful tool. Sadly, its usefulness is rather limited. For example, the diffusion/heat equation is not well posed when run backwards; it blows up after a finite integration time. An intuitive way to see that is that you cannot reconstruct the shape of a glass of water from the puddle you see on the ground some time after it was spilled. But in cases where the relevant PDEs are well posed in both time directions, backward causality is equivalent to forward causality, if not computationally, then at least in principle.
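A quick numerical illustration of that ill-posedness (explicit finite differences on a periodic grid; flipping the sign of the diffusion coefficient stands in for running the equation backwards in time):

```python
import numpy as np

def heat_step(u, dt, dx, kappa):
    """One explicit finite-difference step of u_t = kappa * u_xx on a
    periodic grid. kappa > 0 is ordinary diffusion; kappa < 0 mimics
    integrating the heat equation backwards in time."""
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * kappa * u_xx

x = np.linspace(0.0, 1.0, 64, endpoint=False)
dx, dt = x[1] - x[0], 1e-5
blob = np.exp(-100.0 * (x - 0.5) ** 2)   # a localized blob of heat

u = blob.copy()
for _ in range(500):
    u = heat_step(u, dt, dx, kappa=+1.0)
print("forward  max:", u.max())          # smoothly decays

v = blob.copy()
for _ in range(500):
    v = heat_step(v, dt, dx, kappa=-1.0)
print("backward max:", v.max())          # grows explosively from roundoff
```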

All that special relativity gives you is that the absolute temporal order of events is only defined when they are within a lightcone, not outside of it. General relativity gives you both less and more. On the one hand, the Hilbert action is formulated without referring to time evolution at all and poses no restriction on the type of matter sources, be they positive or negative density, subluminal or superluminal, finite or singular. On the other hand, to calculate most interesting things, one needs to solve the initial value problem, and that one poses various restrictions on what topologies and matter sources one can start with. On the third hand, there is a lot of freedom to define what constitutes "now", as many different spacetime foliations are on equal footing.

If you add quantum mechanics to the mix, the Born rule, needed to calculate anything useful regardless of one's favorite interpretation, breaks linearity and unitarity at the moment of interaction (loosely speaking) and is not time-reversal invariant.

The entropic argument is also without merit: there is no reason to believe that entropy would decrease in a "high-entropy world", whatever that might mean. We do not even know how observer-independent entropy is (Jaynes argued that apparent entropy depends on the observer's knowledge of the world).

Basically, you are confusing map and territory. If backward causality helps you make more accurate maps, go wild, just don't claim that you are doing anything other than constructing models.

Comment by shminux on Predictors exist: CDT going bonkers... forever · 2020-01-15T02:56:19.538Z · score: 2 (1 votes) · LW · GW
Omega will predict their action, and compare this to their actual action. If the two match...

For a perfect predictor the above simplifies to "lose 1 utility", of course. Are you saying that your interpretation of EDT would fight the hypothetical and refuse to admit that perfect predictors can be imagined?

Comment by shminux on Realism about rationality · 2020-01-14T06:13:43.972Z · score: 6 (3 votes) · LW · GW
It seems almost tautologically true that you can't accurately predict what an agent will do without actually running the agent. Because, any algorithm that accurately predicts an agent can itself be regarded as an instance of the same agent.

That seems manifestly false. You can figure out whether an algorithm halts or not without being accidentally stuck in an infinite loop. You can look at the recursive Fibonacci algorithm and figure out what it would do without ever running it. So there is a clear distinction between analyzing an algorithm and executing it. If anything, one would know more about the agent by using the techniques from analysis of algorithms than the agent would ever know about itself.
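Fibonacci makes the point concrete: Binet's closed form is derived by analyzing the recurrence on paper and predicts the output without ever executing the recursive algorithm (floating-point roundoff keeps it exact only for moderate n, well within the range below):

```python
import math

def fib_recursive(n: int) -> int:
    """The naive exponential-time recursion under discussion."""
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_predicted(n: int) -> int:
    """Binet's closed form F(n) = round(phi**n / sqrt(5)), obtained by
    solving the recurrence analytically -- no execution required."""
    phi = (1 + math.sqrt(5)) / 2
    return round(phi**n / math.sqrt(5))

assert all(fib_recursive(n) == fib_predicted(n) for n in range(25))
print(fib_predicted(40))  # 102334155, predicted without running the recursion
```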

Comment by shminux on Moral uncertainty: What kind of 'should' is involved? · 2020-01-14T05:40:06.853Z · score: 2 (1 votes) · LW · GW

30 seconds of googling gave me this link, which might not be anything exceptional but at least it offers a couple of relevant definitions:

what should I do, given that I don’t know what I should do?

and

what should I do when I don’t know what I should do?

and later a more focused question

what am I (or we) permitted to do, given that I (or we) don’t know what I (or we) are permitted to do

At least they define what they are working on...

Comment by shminux on Moral uncertainty: What kind of 'should' is involved? · 2020-01-14T04:05:08.320Z · score: 2 (3 votes) · LW · GW
What do we mean by “moral uncertainty”?

I was looking for a sentence like "We define moral uncertainty as ..." and nothing came up. Did I miss something?

Comment by shminux on The Arrows of Time · 2020-01-14T03:44:49.969Z · score: 2 (1 votes) · LW · GW
Suppose a universe is made up of 16 quantum particles each of which has two states: 0 and 1. In this sense, the entire universe is just a number like 0b0000000000000000.

Well, if your universe is just two states, its description in the eigenstate basis would be something like A1 exp(iE1 t)|1> + A2 exp(iE2 t)|2>, where A1 and A2 are complex and E1 and E2 are real (modulo normalization and phase). I am not sure how this maps into a finite-length binary number.
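To make the contrast concrete, here is that two-level state as actual data (illustrative amplitudes and energies; hbar = 1, sign convention as above):

```python
import numpy as np

A = np.array([0.6, 0.8], dtype=complex)  # A1, A2 with |A1|^2 + |A2|^2 = 1
E = np.array([1.0, 2.5])                 # energy eigenvalues E1, E2

def state(t: float) -> np.ndarray:
    """The state A1 exp(iE1 t)|1> + A2 exp(iE2 t)|2> as an amplitude pair."""
    return A * np.exp(1j * E * t)

print(state(0.0))  # the amplitudes themselves
print(state(1.0))  # a different point on a continuum of complex pairs,
                   # nothing like a finite-length binary number
```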


Comment by shminux on On the role of abstraction in mathematics and the natural sciences · 2020-01-14T03:16:41.974Z · score: 7 (4 votes) · LW · GW

Sorry, my spam filter ate your reply notification :(

To "dissolve" the math invented/discovered question, it's a false dichotomy, as constructing mathematical models, conscious or subconscious, is constructing the natural transformations between categories that allow high "compression ratio" of models of the world. They are as much "out there" in the world as the compression would allow. But they are not in some ideal Platonic world separate from the physical one. Not sure if this makes sense.

wouldn't a presupposition of having abstraction as natural transformation presuppose the existence of abstraction to define itself?

There might be a circularity, but I do not see one. The chain of reasoning is, as above:

1. There is a somewhat predictable world out there

2. There are (surjective) maps from the world to its parts (models)

3. There are commonalities between such maps such that the procedure for constructing one map can be applied to another map.

4. These commonalities, which would correspond to natural transformations in the CT language, are a way to further compress the models.

5. To an embedded agent these commonalities feel like mathematical abstractions.

I do not believe I have used CT to define abstractions, only to meta-model them.

Comment by shminux on (Double-)Inverse Embedded Agency Problem · 2020-01-14T03:05:38.328Z · score: 4 (2 votes) · LW · GW

Right, that's the question. Sure, it is easy to state that "the metric must be a faithful representation of the target", but it never is, is it? From the point of view of double inversion, optimizing the target is a hard inverse problem, because, like in your pizza example, the true "values" (pizza as a preference on the background of an otherwise balanced diet) are not easily observable. What would be a double inverse in this case? Maybe something like trying various amounts of pizza and getting feedback on enjoyment? That would match the long division pattern. I'm not sure.

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-12T02:41:05.876Z · score: 2 (1 votes) · LW · GW

I am not sure how this leads to panpsychism. What are the logical steps there?

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-11T21:55:47.403Z · score: 3 (2 votes) · LW · GW
“Why do I think reality exists?”

Is already answerable. You can list a number of reasons why you hold this belief. You are not supposed to dissolve the new question, only reformulate the original one so that it becomes answerable.

why ANY process “feels” anything at all

Is harder, because we do not have a good handle on what physical process creates feelings, or, in Dennett's approach, how feelings form. But at least we know what kind of research needs to be conducted in order to make progress in that area. In that way the question is answerable, at least in principle; we are just lacking a good understanding of how the human brain works. So the question is ultimately about neuroscience and algorithms.

But the hard problem of consciousness is one of the unique exceptions because it deals with subjective experience, specifically why we have subjective experience at all. (It is, in fact, a variant of the first-cause problem.)

That's the "dangling unit" (my grade 8 self says "lol!" at the term) Eliezer was talking about. There are no "unique exceptions", we are algorithms, and some of the artifacts of running our algorithms is to generate "feelings" or "qualia" or "subjective experiences". If this leaves you saying "but... but... but...", then the next quote from Eliezer already anticipates that:

This dangling unit feels like an unresolved question, even after every answerable query is answered.  No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering:  "But does the falling tree really make a sound, or not?"

Comment by shminux on Is there a moral obligation to respect disagreed analysis? · 2020-01-11T19:09:17.936Z · score: 2 (1 votes) · LW · GW

When in doubt, follow the Golden Rule.

Comment by shminux on Why Quantum? · 2020-01-11T06:46:07.572Z · score: 13 (4 votes) · LW · GW

I've read through the whole Quantum Physics Sequence once or twice, and whenever Eliezer talks about actual science, it is popularized, but not wrong. Some parts are explained really nicely, too. Unfortunately, those are the parts that are also irrelevant to learning rationality, the whole impetus for Eliezer writing the sequence. And the moment he goes into MWI apologia, for lack of a better word, it all goes off the rails: there is no more science, just persuasion. To be fair, he is not alone in that. Sean Carroll, an excellent physicist from whose lecture notes I learned General Relativity, has published a whole book pushing the MWI onto the unsuspecting public.

One area where the Quantum Physics sequence is useful for rationality is exposing how weird and counter-intuitive the world is, and feeling humbled about one's own stated and unstated wrong assumptions and conclusions, something we humans are really bad at. Points like "All electrons are the same, this one here and that one there" and "Actually, there are no electrons, just fields that sometimes look like electrons".

Where the sequence fails utterly, in my view, is in the pseudo-scientific discussions about "world thickness" and the fictional narratives around it.

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-11T03:54:08.146Z · score: 6 (3 votes) · LW · GW

An even simpler summary is in the follow-up post Righting a Wrong Question:

When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.
Compare:
  • "Why do I have free will?"
  • "Why do I think I have free will?"

Comment by shminux on Book review: Rethinking Consciousness · 2020-01-11T03:49:57.453Z · score: 8 (4 votes) · LW · GW

This reminds me of Eliezer's classic post Dissolving the Question.

From your post:

The "hard problem of consciousness" is "why is there an experience of consciousness; why does information processing feel like anything at all?"
The "meta-problem of consciousness" is "why do people believe that there's a hard problem of consciousness?"

From Eliezer's post:

Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X.  Such are mere attempts to explain why people believe in "free will", not explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.

Is there anything else to the book you review beyond what Eliezer captured 12 years ago?

Comment by shminux on (Double-)Inverse Embedded Agency Problem · 2020-01-09T01:41:26.181Z · score: 2 (1 votes) · LW · GW

Looking for "functions that don't exhibit Goodhart effects under extreme optimization" might be a promising area to look into. What does it mean for a function to behave as expected under extreme optimization? Can you give a toy example?

Comment by shminux on (Double-)Inverse Embedded Agency Problem · 2020-01-08T15:44:33.020Z · score: 3 (2 votes) · LW · GW

haha, oops.

Comment by shminux on Voting Phase of 2018 LW Review · 2020-01-08T04:38:46.117Z · score: 2 (1 votes) · LW · GW

The link is 404 for me.

Comment by shminux on [Book Review] The Trouble with Physics · 2020-01-06T08:42:25.849Z · score: 3 (2 votes) · LW · GW
Their domain is supposed to be the universe, I think. Later people said GR is for the large scale and QM is for the small scale but nothing in the theories actually says this, AFAICT.

Each one was constructed for its respective domain. Not surprising that they don't automatically keep their validity in other domains. Quantum mechanics came with its own limiter, the ad hoc Born rule, without which it doesn't predict anything. GR is too weak for small source masses, so we have no idea when and if it stops applying.

To be fair one key problem is a lack of data. If we could build accelerators 10^12 times as powerful as current ones, we may have something to work on. But there are so many possible theories given current data. Given no data, and no way to test theories, physics degenerates into a popularity contest.

Indeed we need more data, but not necessarily at high energies. If anything, measuring the gravitational effects of sources that contain 100,000 nucleons, not 10^23 nucleons, would be more illuminating than a super mega LHC. Or the gravitational effects of any system that can be put into a spatial quantum superposition (i.e. not just a SQUID).