Mask wearing: do the opposite of what the CDC/WHO has been saying? 2020-04-02T22:10:31.126Z · score: 11 (5 votes)
Good News: the Containment Measures are Working 2020-03-17T05:49:12.516Z · score: 26 (6 votes)
(Double-)Inverse Embedded Agency Problem 2020-01-08T04:30:24.842Z · score: 25 (9 votes)
Since figuring out human values is hard, what about, say, monkey values? 2020-01-01T21:56:28.787Z · score: 36 (13 votes)
A basic probability question 2019-08-23T07:13:10.995Z · score: 11 (2 votes)
Inspection Paradox as a Driver of Group Separation 2019-08-17T21:47:35.812Z · score: 32 (14 votes)
Religion as Goodhart 2019-07-08T00:38:36.852Z · score: 21 (8 votes)
Does the Higgs-boson exist? 2019-05-23T01:53:21.580Z · score: 7 (10 votes)
A Numerical Model of View Clusters: Results 2019-04-14T04:21:00.947Z · score: 18 (6 votes)
Quantitative Philosophy: Why Simulate Ideas Numerically? 2019-04-14T03:53:11.926Z · score: 23 (12 votes)
Boeing 737 MAX MCAS as an agent corrigibility failure 2019-03-16T01:46:44.455Z · score: 50 (23 votes)
To understand, study edge cases 2019-03-02T21:18:41.198Z · score: 27 (11 votes)
How to notice being mind-hacked 2019-02-02T23:13:48.812Z · score: 16 (8 votes)
Electrons don’t think (or suffer) 2019-01-02T16:27:13.159Z · score: 2 (9 votes)
Sabine "Bee" Hossenfelder (and Robin Hanson) on How to fix Academia with Prediction Markets 2018-12-16T06:37:13.623Z · score: 11 (3 votes)
Aligned AI, The Scientist 2018-11-12T06:36:30.972Z · score: 12 (3 votes)
Logical Counterfactuals are low-res 2018-10-15T03:36:32.380Z · score: 22 (8 votes)
Decisions are not about changing the world, they are about learning what world you live in 2018-07-28T08:41:26.465Z · score: 36 (21 votes)
Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. 2018-07-12T06:52:19.440Z · score: 24 (12 votes)
The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? 2018-07-08T21:18:20.358Z · score: 47 (20 votes)
Wirehead your Chickens 2018-06-20T05:49:29.344Z · score: 75 (46 votes)
Order from Randomness: Ordering the Universe of Random Numbers 2018-06-19T05:37:42.404Z · score: 16 (5 votes)
Physics has laws, the Universe might not 2018-06-09T05:33:29.122Z · score: 28 (14 votes)
[LINK] The Bayesian Second Law of Thermodynamics 2015-08-12T16:52:48.556Z · score: 8 (9 votes)
Philosophy professors fail on basic philosophy problems 2015-07-15T18:41:06.473Z · score: 16 (21 votes)
Agency is bugs and uncertainty 2015-06-06T04:53:19.307Z · score: 16 (19 votes)
A simple exercise in rationality: rephrase an objective statement as subjective and explore the caveats 2015-04-18T23:46:49.750Z · score: 21 (22 votes)
[LINK] Scott Adam's "Rationality Engine". Part III: Assisted Dying 2015-04-02T16:55:29.684Z · score: 7 (8 votes)
In memory of Leonard Nimoy, most famous for playing the (straw) rationalist Spock, what are your top 3 ST:TOS episodes with him? 2015-02-27T20:57:19.777Z · score: 10 (15 votes)
We live in an unbreakable simulation: a mathematical proof. 2015-02-09T04:01:48.531Z · score: -31 (42 votes)
Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later. 2014-08-28T23:37:06.430Z · score: 19 (19 votes)
[LINK] Could a Quantum Computer Have Subjective Experience? 2014-08-26T18:55:43.420Z · score: 16 (17 votes)
[LINK] Physicist Carlo Rovelli on Modern Physics Research 2014-08-22T21:46:01.254Z · score: 6 (11 votes)
[LINK] "Harry Potter And The Cryptocurrency of Stars" 2014-08-05T20:57:27.644Z · score: 2 (4 votes)
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient 2014-07-04T20:00:48.176Z · score: 37 (37 votes)
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy 2014-06-23T19:09:54.047Z · score: 10 (12 votes)
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality 2014-06-19T20:17:14.063Z · score: 20 (20 votes)
List a few posts in Main and/or Discussion which actually made you change your mind 2014-06-13T02:42:59.433Z · score: 16 (16 votes)
Mathematics as a lossy compression algorithm gone wild 2014-06-06T23:53:46.887Z · score: 40 (43 votes)
Reflective Mini-Tasking against Procrastination 2014-06-06T00:20:30.692Z · score: 17 (17 votes)
[LINK] No Boltzmann Brains in an Empty Expanding Universe 2014-05-08T00:37:38.525Z · score: 9 (11 votes)
[LINK] Sean Carroll Against Afterlife 2014-05-07T21:47:37.752Z · score: 5 (9 votes)
[LINK] Sean Carrol's reflections on his debate with WL Craig on "God and Cosmology" 2014-02-25T00:56:34.368Z · score: 8 (8 votes)
Are you a virtue ethicist at heart? 2014-01-27T22:20:25.189Z · score: 11 (13 votes)
LINK: AI Researcher Yann LeCun on AI function 2013-12-11T00:29:52.608Z · score: 2 (12 votes)
As an upload, would you join the society of full telepaths/empaths? 2013-10-15T20:59:30.879Z · score: 7 (17 votes)
[LINK] Larry = Harry sans magic? Google vs. Death 2013-09-18T16:49:17.876Z · score: 25 (31 votes)
[Link] AI advances: computers can be almost as funny as people 2013-08-02T18:41:08.410Z · score: 7 (9 votes)
How would not having free will feel to you? 2013-06-20T20:51:33.213Z · score: 6 (14 votes)
Quotes and Notes on Scott Aaronson’s "The Ghost in the Quantum Turing Machine" 2013-06-17T05:11:29.160Z · score: 18 (22 votes)


Comment by shminux on Are aircraft carriers super vulnerable in a modern war? · 2020-09-20T23:07:25.098Z · score: 2 (1 votes) · LW · GW

Not an expert, but they seem to be less useful in a conflict with a nuclear power that possesses something like this:

Comment by shminux on God in the Loop: How a Causal Loop Could Shape Existence · 2020-09-16T04:38:56.293Z · score: 3 (2 votes) · LW · GW
If you do travel back to the past though, you may find yourself travelling along a different timeline after that

No, that's not how it works. That's not how any of this works. If you are embedded in a CTC, there is no changing that. There is no escaping the Groundhog Day loop, or even realizing that you are stuck in one. You are not Bill Murray; you are an NPC.

And yes, our universe is definitely not a Gödel universe in any way. The Gödel universe is homogeneous, rotating, and stationary, while our universe is of the FRW-de Sitter type, as best we can tell.

More generally, knowledge about the system, or memory, as well as the ability to act upon it to rearrange information. In fact, if an agent has perfect knowledge of a system, it can rearrange it in any way it desires.

Indeed, but it would not be an embedded agent; it would be something from outside the Universe, at which point you might as well say "God/Simulator/AGI did it" and give up.

if we assume our universe is a causal loop, but it is not a CTC

That is incompatible with classical GR, as best I can glean. The philosophy paper is behind a paywall (boo!), and it's by a philosopher, not a physicist, apparently, so it can be safely discounted (this attitude goes both ways, of course).

From that point on in your post, it looks like you are basically throwing **** against the wall and seeing what sticks, so I stopped trying to understand your logic.

Life doesn’t just veer off the rails into oblivion; it’s locked on a path, or lots of equivalent paths that are all destined to tell the same story — the same universal archetype. The loop cannot be broken, else it would have never existed. Life is bound to persist, bound to overcome, bound to exist again

To quote the classic movie, "Life, uh, finds a way". Which is a nice and warm sentiment, but nothing more.

But, if your goal is a search for God, then 10/10 for rationalization.

Comment by shminux on Egan's Theorem? · 2020-09-14T00:10:11.158Z · score: 5 (2 votes) · LW · GW
When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well - i.e. most of the macroscopic world.

Well, that's false. The details of the quantum-to-classical transition are very much an open problem. Something happens after the decoherence process removes the off-diagonal elements from the density matrix and before only a single eigenvalue remains: the mysterious projection postulate. We have no idea at what scales it becomes important, or in what way. The original goal was to explain new observations, definitely. But it was not "to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well".
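The decoherence step described above (off-diagonal decay of the density matrix, with the projection to a single outcome left unexplained) can be illustrated numerically; the decay rate here is an arbitrary assumption, not a physical value:

```python
import numpy as np

# Density matrix of an equal superposition (|0> + |1>)/sqrt(2)
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]])

def decohere(rho, gamma, t):
    """Exponentially damp the off-diagonal (coherence) terms.

    gamma is a made-up decoherence rate; real rates depend on the
    environment coupling, which is part of the open problem.
    """
    damp = np.exp(-gamma * t)
    out = rho.copy()
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

late = decohere(rho, gamma=1.0, t=10.0)
# The diagonal (classical probabilities) is untouched; decoherence alone
# never picks a single outcome -- that is the projection postulate's job.
print(late)
```

Note that the diagonal entries stay at 0.5 forever: nothing in this process selects one outcome, which is exactly the gap the projection postulate papers over.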

Your other examples are more in line with what was going on, such as

for special and general relativity - they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work

That program worked out really well. But it is not the universal case by any means. Sometimes new models don't work in the old areas at all: models of free will or of consciousness do not reproduce physics, or vice versa.

The way I understand the "it all adds up to normality" maxim (not a law or a theorem by any means) is that new models do not make your old models obsolete where the old models worked well, nothing more.

I have trouble understanding what you would want from what you dubbed Egan's theorem. In one of the comment replies you suggested that the same set of observations could be modeled by two different models, and that there should be a morphism between the two models, either directly or through a third model that is more "accurate" or "powerful" in some sense than the other two. If I knew enough category theory, I could probably express it in terms of some commuting diagrams, but alas. Then again, maybe I misunderstand your intent.

Comment by shminux on Gems from the Wiki: Acausal Trade · 2020-09-13T19:43:43.780Z · score: 2 (1 votes) · LW · GW

I was trying to understand the point of this, and it looks like it is summed up in

Which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.

Isn't this your basic max-EV calculation that is at the core of all decision theories and game theories? The "acausal" part is using the intentional stance to model the parts of the universe that are not directly observable, right?
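To make the "max EV summed across possible environments" reading concrete, here is a toy sketch; the environment names, probabilities, and payoffs are all invented for illustration:

```python
# Hypothetical environments, weighted by their assumed probability.
environments = {"threat_made": 0.3, "promise_made": 0.2, "neither": 0.5}

# payoff[algorithm][environment] -- made-up numbers for two candidate
# agent algorithms, one of which honors acausal threats/promises.
payoff = {
    "ignore_acausal": {"threat_made": -10, "promise_made": 0, "neither": 5},
    "honor_acausal":  {"threat_made": 2,   "promise_made": 4, "neither": 5},
}

def expected_value(algo):
    """Sum payoff across environments, weighted by probability."""
    return sum(p * payoff[algo][env] for env, p in environments.items())

best = max(payoff, key=expected_value)
print(best, expected_value(best))
```

The point is just that once the environments and weights are written down, the "acausal" question reduces to ordinary expected-value maximization over algorithms.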

Comment by shminux on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-12T06:02:18.325Z · score: 4 (2 votes) · LW · GW
I think the most plausible explanation is that scientists don't read the papers they cite

Indeed. Reading the abstract and skimming the intro/discussion is as far as it goes in most cases. Sometimes the title alone is enough to trigger a citation. Often it's "re-citing": copying the references from someone else's paper on the topic. My guess is that maybe 5% of the references in a given paper have actually been read by the authors.

Comment by shminux on ‘Ugh fields’, or why you can’t even bear to think about that task (Rob Wiblin) · 2020-09-12T00:47:31.412Z · score: 0 (2 votes) · LW · GW

Is there a more standard terminology in psychology for this phenomenon? "Ugh field" feels LW-cultish.

Comment by shminux on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-11T05:49:39.881Z · score: 4 (2 votes) · LW · GW
So unless you are willing to commit that not only there is no reliable way to assign a prior, but also assigning a probability in this situation is invalid in itself

Indeed. If you have no way to assign a prior, probability is meaningless. And if you try anyway, you end up with something as ridiculous as the Doomsday argument.

Comment by shminux on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-10T00:52:23.326Z · score: 5 (3 votes) · LW · GW

Note that speaking of probabilities only makes sense if you start with a probability distribution over outcomes.

In the firing-squad setup, the a priori probability distribution is something like 99% dead vs. 1% alive without a collusion to miss, and probably the opposite with it. So the Bayesian update on survival gives you a high probability of collusion to miss. This matches the argument you presented here.
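Spelled out with Bayes' rule (the 50/50 prior on collusion is my own assumption for illustration; the conditional probabilities are the ones from the setup above):

```python
# Likelihoods from the firing-squad setup
p_alive_given_no_collusion = 0.01
p_alive_given_collusion = 0.99

prior_collusion = 0.5  # assumed; the setup itself does not fix this

# Bayes: P(collusion | alive)
evidence = (p_alive_given_collusion * prior_collusion
            + p_alive_given_no_collusion * (1 - prior_collusion))
posterior = p_alive_given_collusion * prior_collusion / evidence
print(round(posterior, 3))  # 0.99
```

The key contrast with fine-tuning: here every number on the right-hand side is available, so the update is meaningful.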

In the fine-tuning argument we have no reliable way to construct an a priori probability distribution. We don't know enough physics to even guess reliably. Maybe it's a uniform distribution over some "fundamental" constants. Maybe it's normal or log-normal. Maybe it's not a distribution over the constants at all, but something completely different. Maybe it's Knightian uncertainty. Maybe it's an intelligent designer/simulator. There is no hint from quantum mechanics, relativity, string theory, loop quantum gravity or any other source. There is only this one universe we observe, that's it. Thus we cannot use Bayesian updating to draw any useful conclusions, whether about fine tuning or anything else. Whether this matches your argument, I am not clear.

Comment by shminux on The ethics of breeding to kill · 2020-09-07T21:24:07.471Z · score: 4 (2 votes) · LW · GW

If you talk to a real vegan, their ethical argument will likely be "do not create animals in order to kill and eat them later", period. Any discussion of the quality of life of the farm animal is rather secondary. This is basically your second argument. The justification is based not on what the animals feel, or on their quality of life, but on what it means to be a moral human being, which is not a utilitarian approach at all. So none of your utilitarian arguments are likely to have much effect on an ethical vegan. Note that the rationalist utilitarians here are not too far from that vegan, or at least that's my conclusion from the comments on my post Wirehead Your Chickens.

Comment by shminux on Why is Bayesianism important for rationality? · 2020-09-02T02:17:29.721Z · score: 4 (2 votes) · LW · GW


a kind of group mind that is created when people consciously come together for a common purpose
Comment by shminux on Why is Bayesianism important for rationality? · 2020-09-01T06:54:31.663Z · score: 11 (5 votes) · LW · GW

(Not speaking for Eliezer, obviously.) "Carefully adjusting one's model of the world based on new observations" seems like the core idea behind Bayesianism in all its incarnations, and I'm not sure there is much more to it than that. The stronger the evidence, the more significant the update, yada-yada. It seems important to rational thinking because we all tend to fall into the trap of either ignoring evidence we don't like or being overly gullible when something sounds impressive. Not that it helps a lot: way too many "rationalists" uncritically accept the local egregores and defend them like a religion. But allegiance to an ingroup is emotionally stronger than logic, so we sometimes confuse rationality with rationalization. Still, relative to many other ingroups this one is not bad, so maybe Bayesianism does its thing.

Comment by shminux on nostalgebraist: Recursive Goodhart's Law · 2020-08-27T06:28:51.844Z · score: 4 (2 votes) · LW · GW

In this situation Goodhart is basically open-loop optimization. An EE analogy would be a high-gain op amp with no feedback circuit. The result is predictable: you get optimized out of the linear mode and into saturation.
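A minimal numeric version of the op-amp analogy (the gain, rail, and feedback values are arbitrary):

```python
def op_amp(v_in, gain=1e5, rail=15.0):
    """Ideal high-gain amplifier, clipped at the supply rails."""
    v_out = gain * v_in
    return max(-rail, min(rail, v_out))

# Open loop: any input beyond +/-150 microvolts slams into saturation.
print(op_amp(0.001))  # 15.0, i.e. saturated

def closed_loop(v_in, gain=1e5, beta=0.1):
    """Negative feedback: output tracks v_in / beta, not the raw gain."""
    return gain / (1 + gain * beta) * v_in

# The "iterate with periodic feedback" regime: stays in the linear mode.
print(round(closed_loop(0.001), 5))  # ~0.01
```

Optimizing hard on a proxy with no corrective feedback is the open-loop case; the output pins at a rail regardless of what you actually wanted.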

You can't explicitly optimize for something you don't know, and you don't know what you really want. You might think you do, but, as usual, beware what you wish for. I don't know if an AI can form a reasonable terminal goal to optimize, but humans surely cannot. Given that some 90% of our brain/mind is not available to introspection, all we have to go by is a vague feeling of "this feels right" or "this is fishy but I cannot put my finger on why". That's why cautiously iterating with periodic feedback is so essential, and open-loop optimization is bound to take you to all the wrong places.

Comment by shminux on On Suddenly Not Being Able to Work · 2020-08-26T01:20:16.377Z · score: 5 (3 votes) · LW · GW

It looks like you get an anxiety flare-up every time you try to work. Anxiety does not necessarily present as a fast heartbeat, hyperventilation or any other easily measurable symptom. I have seen it aplenty in myself and others. Often the issue is not enough slack, but the way you describe it, you seem to have plenty, though maybe not the mental kind.

One approach that I have seen help is "15 min of work". Not a pomodoro, though! Those imply lots of structured work and short breaks. Just... "I will write this code for 15 min" or "I will edit this post for 15 min", no further obligations, no pressure. If you stop after 15 min, it's still an accomplishment; if you decide to keep going for a while, that's fine, too. But stop when you get the same feeling again, and do something more fun. Once the internal pressure goes away, consider when you can do another "15 min, no obligations past that" session.

Comment by shminux on Why don't countries, like companies, more often merge? · 2020-08-23T02:36:44.589Z · score: 6 (3 votes) · LW · GW


Comment by shminux on GPT-3, belief, and consistency · 2020-08-17T02:32:49.305Z · score: 2 (1 votes) · LW · GW
It's not quite the same, because if you're confused and you notice you're confused, you can ask.

You can, if you notice, but most people never do, and those who notice some confusion are still blissfully ignorant of the rest of their self-contradictory beliefs. And by "most people" I mean you, me and everyone else. In fact, if someone pointed out a contradiction in something we hold dear, we would vehemently deny it and rationalize it to no end. And yet we consider ourselves to believe something. If anything, GPT-3's beliefs are more belief-like than those of humans.

Comment by shminux on Many-worlds versus discrete knowledge · 2020-08-14T04:16:46.636Z · score: 3 (5 votes) · LW · GW
For this reason, I significantly prefer the Bohm interpretation over the many-worlds interpretation

Preferences do not make science. Philosophy, for sure.

Odds are, once mesoscopic quantum effects become accessible to experiment, we will find that none of the interpretational models reflect the observations well. I would put 10:1 odds on the energy difference of entangled states not being able to exceed about one Planck mass, some twenty micrograms. Whether there is a collapse of some sort, hidden variables, superdeterminism, who knows.

Anyway, in general I find this approach peculiar: picking a model based on emotional reasoning like "I like indexicality" or "string theory is pretty". It can certainly serve as a guide for where to put one's efforts as a promising research area, but it's not a matter of preference; the observations will be the real arbiter.

Comment by shminux on Fantasy-Forbidding Expert Opinion · 2020-08-10T15:22:37.053Z · score: 4 (3 votes) · LW · GW

Right, that makes sense. One reference class is "does not exist except in a fantasy" and the other is "do not try it on yourself until there is reliable published research".

Comment by shminux on Fantasy-Forbidding Expert Opinion · 2020-08-10T07:20:29.042Z · score: 10 (5 votes) · LW · GW

Hmm, fairies and trolls are not at all like a "vitamin X". There are plenty of supplements known to have a real positive effect in many cases. And we still know so little about the human body and mind that there could still be plenty of low-hanging fruit waiting to be plucked. As for fairies and trolls, we know that these are artifacts of the human tendency to anthropomorphize everything, and there is not a single member of the reference class "not human but human-like in appearance and intelligence". We also understand enough of evolution to exclude, with high confidence, species like that. (Including humanoid aliens, whether in appearance or in way of thinking.) But we cannot convincingly state that some extract of an exotic plant or animal from the depths of the rainforest or the ocean would not prove to have, say, a health benefit for humans. The odds are not good, but immeasurably better than those of finding another intelligence, on this planet or elsewhere.

Comment by shminux on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-10T00:16:19.222Z · score: 2 (1 votes) · LW · GW

Ah, okay. I don't see any reason to be concerned about something that we have no effect on. Will try to explain below.

Regarding "subjunctive dependency" from the post linked in your other reply:

I agree with a version of "They are questions about what type of source code you should be running", formulated as "what type of algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have the option of running some other algorithm (you don't; you are your own algorithm).

The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity" and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively".

Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves. If you postulate that the predictor is an algorithm as well, then the question of decision theory in the presence of predictors becomes something like "what type of agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent's universe.
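The "max EV when immersed in a given predictor algorithm" framing can be sketched for Newcomb's problem; the predictor's accuracy is the one free parameter, and the dollar payoffs are the standard ones:

```python
def newcomb_ev(one_box, accuracy):
    """Expected value for an agent facing a fixed predictor.

    accuracy: probability the predictor models the agent correctly.
    Opaque box B holds $1M iff the predictor expected one-boxing;
    transparent box A always holds $1k.
    """
    if one_box:
        # Predictor correct (prob = accuracy): $1M. Wrong: empty box B.
        return accuracy * 1_000_000
    # Predictor correct: only $1k. Wrong: $1k + $1M.
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000

# Which agent algorithm wins depends only on the fixed predictor:
# one-boxing pulls ahead once accuracy exceeds roughly 1/2.
for acc in (0.5, 0.99):
    print(acc, newcomb_ev(True, acc), newcomb_ev(False, acc))
```

Once the predictor is treated as a fixed algorithm, "subjunctive dependency" drops out: the comparison is just an EV table over agent algorithms.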

Clearly your model is different from the above, since you seriously think about untestables and unaffectables.

Comment by shminux on What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse? · 2020-08-09T19:03:30.377Z · score: 7 (3 votes) · LW · GW

I still don't understand what you mean by "causally-disconnected" here. In physics, the part of the universe you can causally affect is anything in your future light cone (under some mild technical assumptions). In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you are in effect interested in the causally connected parts.

Comment by shminux on What is filling the hole left by religion? · 2020-08-08T05:39:35.142Z · score: 2 (1 votes) · LW · GW
They start asking why so how do I explain?

Then it's no longer a pure one-shot, I agree. But there are plenty of cases where defection can be done with impunity, and personal conscience is all that keeps one from it. It doesn't have to be financial, either.

Comment by shminux on What is filling the hole left by religion? · 2020-08-05T02:02:20.728Z · score: 2 (1 votes) · LW · GW
Just when are we really in a one-shot PD setting?

Any situation where the defection would not be uncovered can be treated as a one-shot PD. Theft, slacking off, you name it.

Comment by shminux on What is filling the hole left by religion? · 2020-08-04T15:56:51.538Z · score: 4 (4 votes) · LW · GW

Morality is what prevents social animals from defecting in one-shot PDs. Society is what prevents social animals from defecting in iterated PDs. So morality is an evolutionary adaptation. In higher animals like primates and whales, where culture is essential, morality is largely learned (including by reading books). In lower animals, like dogs and bees, it is largely innate. The cultural behavioral norms are malleable and depend on the society; thus theft/violence/murder can be acceptable in some cases and prohibited in others. There are no absolutes.

Everything else, like religion, is just elaborate fluff on top.

Comment by shminux on [deleted post] 2020-08-01T01:06:26.545Z

People argue about Newcomb's paradox because they implicitly bury the freedom of choice of both the agent and the predictor in various unstated assumptions. The counterfactual approach is a prime example of this. For a free-will-free treatment of Newcomb's and other decision theory puzzles, see my old post.

Comment by shminux on Would a halfway copied brain emulation be at risk of having different values/identity? · 2020-07-30T06:05:10.633Z · score: 5 (3 votes) · LW · GW

I'd think that the brain is more like a hologram: copying a small part would result in a dimmer and less resolved, but still complete, image. That said, I also don't see an ethical issue in copying an inactive brain "trait-by-trait".

Comment by shminux on New Paper on Herd Immunity Thresholds · 2020-07-30T05:09:52.057Z · score: 2 (5 votes) · LW · GW

I thought the data showed that immunity lasts maybe 2-3 months? If so, we will never get to 10%.

Comment by shminux on What Failure Looks Like: Distilling the Discussion · 2020-07-30T05:03:15.117Z · score: 5 (3 votes) · LW · GW

Like others, I find "failure by enemy action" more akin to a malicious human actor injecting some extra misalignment into an AI whose inner workings are already poorly understood. Which is a failure mode worth worrying about, as well.

Comment by shminux on What a 20-year-lead in military tech might look like · 2020-07-29T21:14:52.715Z · score: 2 (1 votes) · LW · GW

I agree that we already have the makings of removing humans from the battlefield. Fighters, bombers, drones: all can be controlled remotely (especially once the Starlink-like constellations are up) or semi-autonomously. Boston Dynamics-like robots can certainly be deployed as a substitute for boots on the ground, not too far down the road. And, given the unexpected advances in AI through GPT-style text/image generators, this tech is getting uncomfortably close to the Terminator-like scenarios.

Comment by shminux on Billionaire Economics · 2020-07-28T04:20:42.913Z · score: 11 (12 votes) · LW · GW

You can't do anything meaningful on the country-wide scale for $20 billion. The US social security budget is over a trillion dollars, the largest budget expense, and it is not nearly enough. Your "progressive" (read: ideologically left) friends are posting it for the feel-good ingroup applause lights without ever checking the sources. Not much different from "covid is a Chinese/Liberal conspiracy".

Comment by shminux on A Natural Explanation of Nash Equilibria · 2020-07-26T21:53:20.580Z · score: 2 (1 votes) · LW · GW

Just a general comment that humans and most other social animals have evolved internal mechanisms for correcting the PD-type payoff matrix even in the absence of an external threat of punishment for defection. We often call it "honor" or "conscience". It is not foolproof, but it tends to function well enough to prevent ingroup collapse.
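A toy version of that payoff correction (the numbers are the textbook PD payoffs plus an invented "guilt" cost): an internal cost attached to defection can flip the one-shot PD's dominant strategy.

```python
# Classic one-shot PD payoffs for the row player: (my_move, their_move)
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(their_move, guilt=0.0):
    """Pick the move maximizing payoff minus an internal cost of defecting."""
    def score(my_move):
        cost = guilt if my_move == "D" else 0.0
        return payoff[(my_move, their_move)] - cost
    return max(["C", "D"], key=score)

# Without conscience, defection dominates:
print(best_response("C"), best_response("D"))  # D D

# A guilt cost above 2 makes cooperation the best response to cooperation:
print(best_response("C", guilt=3.0))  # C
```

The external payoffs never change; the "honor" term is internal, which is why it works even when no one is watching.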

Comment by shminux on Access to AI: a human right? · 2020-07-25T17:43:24.049Z · score: 4 (2 votes) · LW · GW
Instead, you're greeted with a message : "Apple has suspended your computer access due to violation of terms".

More likely you will not notice anything unusual, except that no one seems to read your stories anymore and the old ones do not show up in search. That's how it works now, with shadowbans on Reddit and Twitter.

This is just an illustration of the kind of power organisations would wield if they controlled our access to advanced computing.

Or to the regular online interactions, the way it already is now, and no one bats an eye.

I feel a decentralised AI, to which everyone has equal access at the same price, is the need of the hour.

Not yet, though it might become a de facto essential service one day. However, you can't Marx your way there without breaking more than you fix, and definitely not through legislation. If you care about more universal access to AI tools, work on creating more accessible tools.

Comment by shminux on The Basic Double Crux pattern · 2020-07-23T04:14:47.905Z · score: 4 (3 votes) · LW · GW

Maybe we live in different bubbles.

Comment by shminux on The Basic Double Crux pattern · 2020-07-23T00:58:11.241Z · score: 2 (1 votes) · LW · GW
Well if you found out that tea actually didn’t cause cancer, would you be fine with people drinking tea?

In my experience that's where most attempts to reach mutual understanding fail. People refuse to entertain a hypothetical that contradicts their deeply held beliefs, period. Not just "those irrational people", but basically everyone, you and me included. If the belief/alief in question is a core one, there is almost no chance of us earnestly considering that it might be false, not in a single conversation, anyway.

An example:

"What if you found out that vaccines caused autism?" "They don't, the only study claiming this was decisively debunked." "But just imagine if they did" "Are you trolling? We know they don't!"

An opposite example:

"What if you found out that vaccines didn't cause autism?" "They do, it's a conspiracy by the pharma companies and the government, they poison you with mercury." "Just for the sake of argument, what if they didn't?" "You are so brainwashed by the media, you need to open your eyes to reality!"

Comment by shminux on Uncalibrated quantum experiments act clasically · 2020-07-22T04:34:03.516Z · score: 2 (1 votes) · LW · GW

The way probabilities are calculated is quite constrained. The Kochen–Specker theorem might be a worthwhile place to start from.

Comment by shminux on Uncalibrated quantum experiments act clasically · 2020-07-22T01:18:11.088Z · score: 3 (2 votes) · LW · GW

Taking on the Born rule is a tall order. It's up there with P vs NP in the amount of effort expended by extremely smart people. It is quite likely that it will emerge from integrating general relativity with quantum mechanics, somewhere at the level where irreversible classicality emerges, probably for objects on the order of the Planck mass (some 10^19 atoms).
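The figures above can be checked from the constants (this is just the arithmetic behind "Planck mass ~ 10^19 atoms", not an endorsement of the conjecture):

```python
import math

hbar = 1.054571817e-34     # J*s
c = 2.99792458e8           # m/s
G = 6.67430e-11            # m^3 kg^-1 s^-2
m_proton = 1.67262192e-27  # kg, stand-in for one nucleon

m_planck = math.sqrt(hbar * c / G)
print(m_planck)             # ~2.18e-8 kg, i.e. about 22 micrograms
print(m_planck / m_proton)  # ~1.3e19 nucleons, the "10^19 atoms"
```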

Comment by shminux on Human-AI Interaction · 2020-07-22T01:10:54.855Z · score: 2 (1 votes) · LW · GW

We raise children to satisfy their expected well-being, not their naive preferences (for chocolate and toys), and that seems similar to what a smarter-than-human AI would do to/for us. Which was my point.

Comment by shminux on Anthropomorphizing Humans · 2020-07-18T02:16:21.783Z · score: 2 (1 votes) · LW · GW

I agree that this misattribution is a very real and common thing, but I question whether your meta-description is a useful one. Yes, we misdiagnose crabbiness when it's caused by hunger or tiredness. But then we constantly misdiagnose every complex system, not just humans.

Comment by shminux on A future for neuroscience · 2020-07-18T02:03:23.607Z · score: 4 (2 votes) · LW · GW

2 years later and 3 years since the publication of the original results, is there anything new to report?

Comment by shminux on Null-boxing Newcomb’s Problem · 2020-07-14T04:59:08.711Z · score: 5 (3 votes) · LW · GW

Following Scott Aaronson's The Ghost in the Quantum Turing Machine (section 2 being required reading for an aspiring rationalist), it makes sense to count multiple identical copies having the same value as a single one, since they add no new information to the world beyond the original. In this approach the original will not notice any difference after taking the box, and neither will the simulation, so there is no benefit in not taking the boxes, since the original will just be a million dollars poorer.

Comment by shminux on Was a PhD necessary to solve outstanding math problems? · 2020-07-11T16:58:30.810Z · score: 4 (3 votes) · LW · GW

Just to TL;DR my comment above: getting a PhD is many times easier than "accomplishing groundbreaking work", so if the former is an obstacle, you will never do the latter.

Comment by shminux on Was a PhD necessary to solve outstanding math problems? · 2020-07-10T23:54:59.548Z · score: 8 (6 votes) · LW · GW

There have been undergrad and grad students who solved an open math problem before getting their PhD, but for them getting a PhD was not even a question they considered; it was just something that happened naturally. It's not about credentialism, it's about being smart enough, creative enough and hard-working enough to outdo the rest of a very crowded field in a particular area. If you are all that, writing up a PhD thesis is a minor step. And if you are not all that, why pick a field like math to begin with?

Comment by shminux on Should I take an IQ test, why or why not? · 2020-07-10T23:43:13.499Z · score: 4 (2 votes) · LW · GW

Take the test and it will give you a ballpark of your intelligence level. It does not correlate with your success in life, but if you score under 120, you are likely to have a hard time competing for a job in STEM academia.

Comment by shminux on Causality and its harms · 2020-07-07T02:33:43.166Z · score: 2 (1 votes) · LW · GW
If causation is understood in terms of counterfactuals — X would have happened if Y had happened — then there is still a difference between cause and effect. A model of a world implies models of hypothetical, counterfactual worlds.

Yes, indeed, in terms of counterfactuals there is. But counterfactuals are in the map (well, to be fair, a map is itself a tiny part of the territory, in the agent's brain). Which was my original point: causality is in the map.

Comment by shminux on Causality and its harms · 2020-07-06T01:58:00.328Z · score: 2 (1 votes) · LW · GW
And yet, there is some underlying physical process which drives our ability to model the world with the idea that things cause other things and we might reasonably point to it and say it is the real causality, i.e. the aspect of existence that we perceive as change.

Hmm. Imagine the world as fully deterministic. Then there is no "real causality" to speak of: everything is set in stone, and there is no difference between cause and effect. The "underlying physical process which drives our ability to model the world with the idea that things cause other things" is essential to being an embedded agent, since agency amounts to perceived world optimization, which in turn requires predictability from inside the world; but I don't think anyone has a good handle on what "predictability from inside the world" might look like. Offhand, it means that there is a subset of the world that runs a coarse-grained simulation of the world, but how do you recognize such a simulation without already knowing what you are looking for? Anyway, this is a bit of a tangent.

Comment by shminux on Causality and its harms · 2020-07-05T02:16:45.673Z · score: 7 (4 votes) · LW · GW

TL;DR: Causality is an abstraction, a feature of our models of the world, not of the world itself, and sometimes it is useful, but other times not so much. Notice when it's not useful and use other models.

Comment by shminux on How to decide to get a nosejob or not? · 2020-07-03T05:02:45.499Z · score: 2 (1 votes) · LW · GW

Looks can be changed a lot with some judicious makeup. Consider watching some YouTube videos on contouring and trying it out, either by yourself or with some professional help. See if you notice any change in how others see you and relate to you.

Comment by shminux on Atemporal Ethical Obligations · 2020-06-27T03:11:21.409Z · score: 2 (1 votes) · LW · GW

Then I'm guessing that you are explicitly or implicitly a moral realist...

Comment by shminux on Atemporal Ethical Obligations · 2020-06-27T00:21:12.813Z · score: 15 (7 votes) · LW · GW
JK Rowling isn’t even dead yet, and beliefs that would have put her at the liberal edge of the feminist movement thirty years ago are now earning widespread condemnation.

If you think that cancel culture is progress in morality, the future will judge you harshly, if acausally.

Comment by shminux on Probability interpretations: Examples · 2020-06-20T23:33:27.146Z · score: 2 (1 votes) · LW · GW

Right, never mind for a moment what your discourse style is. Disengaging.

Comment by shminux on Probability interpretations: Examples · 2020-06-20T22:30:45.628Z · score: 2 (1 votes) · LW · GW
Determinism is defined in terms of inevitability, i.e. lack of possible alternatives. We do not regard the future as undetermined just because it has not happened yet.

I don't argue with that; in fact, the statement above makes my point: there is no difference between an as-yet-unknown-to-you (but predetermined) digit of pi and anything else that is not yet known to you, like the way a coin lands when you flip it.
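To make that epistemic point concrete: the digits of pi are fully determined, yet before you compute them, a calibrated credence of ~1/10 in each unseen digit is exactly as good as a 1/2 credence in a fair coin flip. A minimal sketch of this (my own illustration, not from the original thread), computing 1000 digits with Machin's formula in integer arithmetic and checking that each digit indeed shows up roughly 1/10 of the time:

```python
from collections import Counter

def arctan_inv(x, one):
    # Integer-scaled Taylor series for arctan(1/x):
    # 1/x - 1/(3*x^3) + 1/(5*x^5) - ..., all scaled by `one`.
    total = 0
    power = one // x
    n = 1
    while power:
        total += power // n if n % 4 == 1 else -(power // n)
        power //= x * x
        n += 2
    return total

def pi_digits(n):
    # First n decimal digits of pi (including the leading 3),
    # via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    one = 10 ** (n + 10)  # 10 guard digits absorb truncation error
    pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi_scaled)[:n]

digits = pi_digits(1000)
freq = Counter(digits)
print(digits[:10])           # 3141592653
print(sorted(freq.items()))  # each digit appears roughly 100 times
```

The frequencies hover around 100 out of 1000, so betting at 1/10 odds on an unseen digit is calibrated even though every digit was "set in stone" all along.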