One night, without sleep 2018-08-16T17:50:06.036Z · score: 16 (9 votes)
Anthropics and a cosmic immune system 2013-07-28T09:07:19.427Z · score: -8 (33 votes)
Living in the shadow of superintelligence 2013-06-24T12:06:18.614Z · score: 0 (21 votes)
The ongoing transformation of quantum field theory 2012-12-29T09:45:55.580Z · score: 22 (37 votes)
Call for a Friendly AI channel on freenode 2012-12-10T23:27:08.618Z · score: 7 (14 votes)
FAI, FIA, and singularity politics 2012-11-08T17:11:10.674Z · score: 12 (23 votes)
Ambitious utilitarians must concern themselves with death 2012-10-25T10:41:41.269Z · score: 4 (19 votes)
Thinking soberly about the context and consequences of Friendly AI 2012-10-16T04:33:52.859Z · score: 9 (40 votes)
Debugging the Quantum Physics Sequence 2012-09-05T15:55:53.054Z · score: 32 (54 votes)
Friendly AI and the limits of computational epistemology 2012-08-08T13:16:27.269Z · score: 18 (53 votes)
Two books by Celia Green 2012-07-13T08:43:11.468Z · score: -9 (18 votes)
Extrapolating values without outsourcing 2012-04-27T06:39:20.840Z · score: 7 (22 votes)
A singularity scenario 2012-03-17T12:47:17.808Z · score: 6 (23 votes)
Is causal decision theory plus self-modification enough? 2012-03-10T08:04:10.891Z · score: -4 (17 votes)
One last roll of the dice 2012-02-03T01:59:56.996Z · score: 1 (41 votes)
State your physical account of experienced color 2012-02-01T07:00:39.913Z · score: -1 (31 votes)
Does functionalism imply dualism? 2012-01-31T03:43:51.973Z · score: -1 (30 votes)
Personal research update 2012-01-29T09:32:30.423Z · score: 4 (45 votes)
Utopian hope versus reality 2012-01-11T12:55:45.959Z · score: 23 (36 votes)
On Leverage Research's plan for an optimal world 2012-01-10T09:49:40.086Z · score: 25 (42 votes)
Problems of the Deutsch-Wallace version of Many Worlds 2011-12-16T06:55:55.479Z · score: 4 (15 votes)
A case study in fooling oneself 2011-12-15T05:25:52.981Z · score: -2 (60 votes)
What a practical plan for Friendly AI looks like 2011-08-20T09:50:23.686Z · score: 1 (22 votes)
Rationality, Singularity, Method, and the Mainstream 2011-03-22T12:06:16.404Z · score: 38 (47 votes)
Who are these spammers? 2011-01-20T09:18:10.037Z · score: 5 (8 votes)
Let's make a deal 2010-09-23T00:59:43.666Z · score: -18 (35 votes)
Positioning oneself to make a difference 2010-08-18T23:54:38.901Z · score: 5 (16 votes)
Consciousness 2010-01-08T12:18:39.776Z · score: 4 (54 votes)
How to think like a quantum monadologist 2009-10-15T09:37:33.643Z · score: -15 (36 votes)
How to get that Friendly Singularity: a minority view 2009-10-10T10:56:46.960Z · score: 12 (27 votes)
Why Many-Worlds Is Not The Rationally Favored Interpretation 2009-09-29T05:22:48.366Z · score: 8 (39 votes)


Comment by mitchell_porter on Formal Metaethics and Metasemantics for AI Alignment · 2019-10-10T06:54:12.887Z · score: 3 (2 votes) · LW · GW

"If you or anyone else could point to a specific function in my code that we don't know how to compute, I'd be very interested to hear that."

From the comments in main():

"Given a set of brain models, associate them with the decision algorithms they implement."

"Then map each brain to its rational self's values (understood extensionally i.e. cashing out the meaning of their mental concepts in terms of the world events they refer to)."

Are you assuming that you have whole brain emulations of a few mature human beings? And then the "decision algorithms" and "rational... values" are defined in terms of how those emulations respond to various sequences of inputs?

Comment by mitchell_porter on Formal Metaethics and Metasemantics for AI Alignment · 2019-10-09T23:16:48.466Z · score: 5 (3 votes) · LW · GW

This looks like important work. Like Gordon, I expect that on closer examination I will find functions in your code that are tasked with carrying out computations we don't know how to do, or which may even be infeasible in their present form - e.g. "map each brain to its rational self's values". Great concept, but how many future scientific breakthroughs will we need before we know how to do that?

Nonetheless, even a schema for friendly AI has great value. It's certainly progress beyond 2006. :-)

Comment by mitchell_porter on Interview with Aella, Part I · 2019-09-20T00:29:46.426Z · score: 15 (12 votes) · LW · GW

Why is this person interesting or important?

Comment by mitchell_porter on What happened to Leverage Research? · 2019-09-03T10:21:10.562Z · score: 5 (3 votes) · LW · GW

I was surprised to see Leverage mentioned in the recent article "Leaked Emails Show How White Nationalists Have Infiltrated Conservative Media". This is one of those exposés that gets 15 minutes of fame on political Twitter. If I am reading it correctly, one of the protagonists starts at Tucker Carlson's Daily Caller, then founds a neoreactionary webzine, and later joins Leverage.

Comment by mitchell_porter on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T03:44:59.492Z · score: 3 (2 votes) · LW · GW

Before this, the only other low-budget method of planetary cooling that I knew of was dumping sulfate aerosols into the upper atmosphere (from rockets or balloons), in imitation of volcanic eruptions and dirty coal burning. The tactics have two things in common. One is that their immediate effects are regional rather than global; the other is that their effects quickly wash out unless you keep pumping.

Carbon dioxide disperses through the atmosphere in a relatively homogeneous way. But these much larger particles will remain concentrated at particular altitudes and latitudes. So when and where they are released will certainly need to be chosen carefully.

As for the short lifespan of the particles, again it contrasts with carbon dioxide. Once a carbon dioxide excess is created, it will sit there for decades, possibly centuries. There will be turnover due to the biological carbon cycle, but a net reduction, in the form of uptake by natural carbon sinks, is a very slow process.

The carbon dioxide sits there and traps heat, and the sulfate aerosols or water droplets only alleviate this by reflecting sunlight and thus reducing the amount of energy that gets trapped. So the moment you stop launching sulfate rockets or turn off your seawater vaporizers, the full greenhouse heat will swiftly return.

That's why extracting and sequestering atmospheric carbon is a much more permanent solution - but it is extremely energy-expensive. For example, you can crack open certain minerals and CO2 will bond to the exposed surfaces, but it takes a lot of energy to mine, pulverize, and distribute enough of the resulting powder to make a difference. Some kind of nanotechnology could surely do it, but that would be a cusp-of-singularity technology anyway. So there's a reasonable chance that some of these low-cost mitigation methods will begin to be deployed, some time before a singularity puts an end to the Anthropocene.
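To give a sense of scale for that energy cost, here is a back-of-envelope sketch of mineral carbon capture. Every number below is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope energy estimate for mineral carbon capture.
# ALL NUMBERS ARE ILLUSTRATIVE ASSUMPTIONS, not measured values.
co2_to_remove_t = 1e9          # target: one gigatonne of CO2 (assumed)
rock_per_t_co2 = 1.0           # tonnes of pulverized rock per tonne of CO2 (assumed)
energy_per_t_rock_kwh = 100.0  # mining + grinding + spreading, per tonne (assumed)

total_kwh = co2_to_remove_t * rock_per_t_co2 * energy_per_t_rock_kwh
print(f"{total_kwh:.1e} kWh")  # 1.0e+11 kWh, i.e. 100 TWh under these assumptions
```

Even with these deliberately round numbers, the point survives: removing gigatonnes this way costs energy on the scale of a country's annual electricity output.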

Comment by mitchell_porter on Causal Reality vs Social Reality · 2019-06-25T03:29:47.064Z · score: 11 (4 votes) · LW · GW

To my mind, this is too vague an explanation. Why do far more people believe in fighting global warming than in fighting the ageing process? Both causes rest on scientific premises. You may say that the causal thinkers interested in fighting global warming managed to bring lots of social thinkers along with them, by using social mechanisms; but why did the anti-warmers manage that, when the anti-agers did not? Also, even if we focus only on causal thinkers, it's far more common to deplore global warming than to deplore the ageing process.

Comment by mitchell_porter on [Answer] Why wasn't science invented in China? · 2019-04-24T03:12:26.823Z · score: 10 (9 votes) · LW · GW

Count me as one of those who regards the question as dubious. At various points in this essay, the thing that was to be invented becomes "*modern* science" or "scientific *method*". China always had plenty of people who wanted to know the truth, who devised systematic models of the world, and who managed to discover things. Out of human civilizations, Europe certainly hit the scientific jackpot, in the sense that numerous developments came tumbling out of Pandora's box together. But the spirit of inquiry had already existed in many times and places.

Also, I would like to see an investigation like this, directed at answering the question: Why didn't Less Wrong (or MIRI) invent deep learning?

Comment by mitchell_porter on What is Driving the Continental Drift? · 2019-04-20T09:44:09.651Z · score: 2 (1 votes) · LW · GW

So if I have understood the gist of your theory, it is that continental drift is not driven by mantle convection (hot rock rising, cold rock sinking), but by giant magma streams which are somehow coupled to, or even driven by, the rapidly rotating core?

Comment by mitchell_porter on A Case for Taking Over the World--Or Not. · 2019-04-14T08:05:36.114Z · score: 8 (4 votes) · LW · GW

Previous LW discussions on taking over the world (last updated in 2013).

Comments of mine on "utopian hope versus reality" (dating from 2012).

Since that era, a few things have happened.

First change: LW is not quite the point of focus that it was. There was a rationalist diaspora into social media, and "Slate Star Codex" (and its associated subreddits?) became a more prominent locus of rationalist discussion. The most important "LW-ish" forums that I know about now, might be those which focus on quasi-technical discussion of AI issues like "alignment". I call them the most important because of...

Second change: The era of deep learning, and of commercialized AI in the guise of "machine learning", arrived. The fact that these algorithms are not limited to the resources of a single computer, but can in principle tap the resources of an entire data center or even the entire cloud of a major tech corporation, means that we have also arrived at the final stage of the race towards superintelligence.

In the past, taking over the world meant building or taking over the strongest superpower. Now it simply means being the first to create strongly superhuman intelligence; and saving the world means identifying a value system that will make an autonomous AI "friendly", and working to ensure that the winner of the mind race is guided by friendly rather than unfriendly values. Every other concern is temporary, and any good work done towards other causes will potentially be undone by unfriendly AI, if unfriendly values win the AI race.

(I do not say with 100% certainty that this is the nature of the world, but this scenario has sufficient internal logic that, if it does not apply to reality, there must be some other factor which somehow overrides it.)

Comment by mitchell_porter on Life can be better than you think · 2019-01-25T16:28:52.172Z · score: 2 (1 votes) · LW · GW

People like Schopenhauer and Benatar are just being realistic. Reality includes futility and horror on enormous scales. Perhaps the remaking of Earth by superhuman AI offers an imminent chance that even this can change, but it's just a chance.

Comment by mitchell_porter on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-09T02:23:03.667Z · score: 2 (1 votes) · LW · GW

So what is he saying? We never need to solve the problem of designing a human-friendly superintelligent agent?

Comment by mitchell_porter on Boltzmann Brains, Simulations and self refuting hypothesis · 2018-11-28T02:29:49.973Z · score: 2 (1 votes) · LW · GW

This is the reverse of the usual argument that we should not believe we are going to have a googol descendants. Usually one says: to be living at the beginning of time means that you belong to a very special minority; therefore it would take more indexical information to single you out than it would for someone from the middle of history.
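The usual argument can be put as a toy Bayesian calculation. The hypotheses and numbers below are purely illustrative, not anyone's actual credences:

```python
# Toy self-sampling calculation (illustrative numbers only).
# Two hypotheses: a "short" history with 1e11 observers in total,
# and a "long" history with 1e100 (a googol) observers.
# Your datum: your birth rank is within the first 1e11 observers.

prior_short = 0.5
prior_long = 0.5

# P(rank <= 1e11 | hypothesis), treating yourself as a uniform random
# draw from all observers who ever exist (the self-sampling assumption).
lik_short = 1.0          # everyone in the short history is that early
lik_long = 1e11 / 1e100  # almost no one in the long history is

posterior_long = (prior_long * lik_long) / (
    prior_short * lik_short + prior_long * lik_long)

print(posterior_long)  # vanishingly small: an early rank counts against "long"
```

The reversed argument in the post amounts to disputing the likelihood term, not the arithmetic.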

Comment by mitchell_porter on Quantum theory cannot consistently describe the use of itself · 2018-09-21T03:25:29.340Z · score: 6 (4 votes) · LW · GW

The thought experiment involves observers being in a coherent superposition. But I'm not now 100% sure that it involves actual quantum erasure; I was relying on other people's descriptions. I'm hoping this will be cleared up without my having to plough through the paper myself.

Anyway, LW may appreciate this analysis which actually quotes HPMOR.

Comment by mitchell_porter on Open Thread September 2018 · 2018-09-19T23:48:59.478Z · score: 7 (3 votes) · LW · GW

It's a minor new quantum thought experiment which, as often happens, is being used to promote dumb sensational views about the meaning or implications of quantum mechanics. There's a kind of two-observer entangled system (as in "Hardy's paradox"), and then they say, let's also quantum-erase or recohere one of the observers so that there is no trace of their measurement ever having occurred, and then they get some kind of contradictory expectations with respect to the measurements of the two observers.

Undoing a quantum measurement in the way they propose is akin to squirting perfume from a bottle, smelling it, and then having all the molecules in the air happen to knock all the perfume molecules back into the bottle, while fluctuations in your brain erase the memory of the smell. Classically that's possible but utterly unlikely, and exactly the same may be said of undoing a macroscopic quantum measurement, which requires the decohered branches of the wavefunction (corresponding to different measurement outcomes) to separately evolve so as to converge on the same state and recohere.

Without even analyzing anything in detail, it is hardly surprising that if an observer is subjected to such a highly artificial process, designed to undo a physical event in its totality, then the observer's inferences are going to be skewed somehow. So, you do all this and the observers differ in their quantum predictions somehow. In their first interpretation (2016), Frauchiger and Renner said that this proves many worlds. Now (2018), they say it proves that quantum mechanics can't describe itself. Maybe if they try a third time, they'll hit on the idea that one of the observers is just wrong.

Comment by mitchell_porter on One night, without sleep · 2018-08-24T09:06:01.301Z · score: 2 (1 votes) · LW · GW

Migraine is just an occasional problem. Living and working conditions are the truly chronic problem that has made me irrelevant.

Comment by mitchell_porter on One night, without sleep · 2018-08-19T08:33:57.817Z · score: 3 (2 votes) · LW · GW

Thanks for the response. I am actually most concerned about the things that I could be doing, that I don't see anyone else doing, and which aren't being done because I am operating at far below my potential. In my case, I think illness is very much just a symptom of the struggle to get on with things in an interfering environment.

The most ambitious thing that I can think of attempting, is to solve the AI value alignment problem in time for Earth's singularity. After this bout of sickness, and several days of dawdling while I waited to recover, I somehow have a new tactic for approaching the problem (it's more a personal tactic for engaging with the problem, than an idea for a solution). I hate the idea that this kind of experience is the price I pay for really pushing ahead, but it may be so.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T13:55:26.785Z · score: 2 (1 votes) · LW · GW

Banning high-end GPUs so that only the government can have AI? They could do it, they might feel compelled to do something like it, but there would be serious resistance and moments of sheer pandemonium. They can say it's to protect humanity, but to millions of people it will look like the final step in the enslavement of humanity.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T09:18:22.834Z · score: 2 (1 votes) · LW · GW

"Organization working on AI" vs "any other kind of organization" is not the important point. The important point is ALL. We are talking about a hypothetical organization capable of shutting down ALL artificial intelligence projects that it does not like, no matter where on earth they are. Alicorn kindly gives us an example of what she's talking about: "destroy all the GPUs on the planet and prevent the manufacture of new ones".

Just consider China, Russia, and America. China and America lead everyone else in machine learning; Russia has plenty of human capital and has carefully preserved its ability to not be pushed around by America. What do you envisage - the three of them agree to establish a single research entity, that shall be the only one in the world working on AI near a singularity threshold, and they agree not to have any domestic projects independent of this joint research group, and they agree to work to suppress rival groups throughout the world?

Despite your remarks about how the NSA could easily become the hub of a surveillance state tailored to this purpose, I greatly doubt the ability of NSA++ to successfully suppress all rival AI work even within America and throughout the American sphere of influence. They could try, they could have limited success - or they could run up against the limits of their power. Tech companies, rival agencies, coalitions of university researchers, other governments, they can all join forces to interfere.

In my opinion, the most constructive approach to the fact that there are necessarily multiple contenders in the race towards superhuman intelligence, is to seek intellectual consensus on important points. The technicians who maintain the world's nuclear arsenals agree on the basics of nuclear physics. The programmers who maintain the world's search engines agree on numerous aspects of the theory of algorithms. My objective here would be that the people who are working in proximity to the creation of superhuman intelligence, develop some shared technical understandings about the potential consequences of what they are doing, and about the initial conditions likely to produce a desirable rather than an undesirable outcome.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T01:04:19.622Z · score: 2 (1 votes) · LW · GW

A great power can think about doing such things against an opponent. But I thought we were talking about a scenario in which some AI clique has halted *all* rival AI projects throughout the entire world, effectively functioning like a totalitarian world government, but without having actually crossed the threshold of superintelligence. That is what I am calling a fantasy.

The world has more than one great power, great powers are sovereign within their own territory, and you are not going to overcome that independence by force, short of a singularity. The rest of the world will never be made to stop, just so that one AI team can concentrate on solving the problems of alignment without having to look over its shoulder at the competition.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-15T05:38:34.306Z · score: 2 (1 votes) · LW · GW

How are you going to stop a rival nuclear-armed state from doing whatever it wants on its own territory?

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-14T04:01:30.080Z · score: 2 (1 votes) · LW · GW

Can someone in China halt AI research at Google and Amazon? Can someone in America halt AI research at Tencent and Baidu? Could the NSA halt unapproved AI research just throughout America?

By a singularity I mean creation of superhuman intelligence that nothing in the world can resist.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-12T13:24:30.303Z · score: 3 (2 votes) · LW · GW

My opinion: the capacity to forcibly halt all rival AI projects is only to be expected in an AI project that has already produced a singularity. It is not a viable tactic if you are aiming to create a friendly singularity. In that case, there is no alternative to solving the problems of friendly values and value stability, and either reaching the singularity first, or influencing those who will get there before you.

Comment by mitchell_porter on Why it took so long to do the Fermi calculation right? · 2018-07-02T23:22:27.215Z · score: 10 (7 votes) · LW · GW

Doesn't this paper boil down to "Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so those probabilities must be small after all"?
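That move can be illustrated with a toy Monte Carlo over a simplified Drake-style product. The factor ranges below are placeholders of my own, not the paper's actual priors:

```python
import math
import random

# Toy Monte Carlo over a simplified Drake-style product.
# Each factor gets a log-uniform prior over an ILLUSTRATIVE, made-up range;
# N is the product of all factors. Multiplying point estimates can suggest
# N >> 1, while the full distribution puts most of its mass on N < 1.

random.seed(0)

def log_uniform(lo, hi):
    """Sample a value whose logarithm is uniform on [log lo, log hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

ranges = [
    (1.0, 100.0),   # star formation rate (illustrative)
    (0.1, 1.0),     # fraction of stars with planets (illustrative)
    (0.1, 1.0),     # habitable planets per such star (illustrative)
    (1e-30, 1.0),   # probability of abiogenesis (hugely uncertain)
    (1e-3, 1.0),    # life -> intelligence (illustrative)
    (1e-3, 1.0),    # intelligence -> detectable civilization (illustrative)
]

samples = 100_000
frac_empty = sum(
    1 for _ in range(samples)
    if math.prod(log_uniform(lo, hi) for lo, hi in ranges) < 1.0
) / samples

print(f"fraction of samples with N < 1: {frac_empty:.2f}")
```

With one factor uncertain over thirty orders of magnitude, most samples give an empty galaxy, even though the product of the medians does not; that, as far as I can tell, is the paper's whole mechanism.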

Comment by mitchell_porter on Weak arguments against the universal prior being malign · 2018-06-15T03:31:32.569Z · score: 8 (2 votes) · LW · GW

I guess it makes sense, given enough assumptions. There's a multiverse; in some fraction of universes there are intelligences which figure out the correct theory of the multiverse; some fraction of those intelligences come up with the idea of acausally coordinating with intelligences in other universes, via a shared model of the multiverse, and are motivated to do so; and then the various island populations of intelligences who are motivated to attempt such a thing, try to reason about each other's reasoning, and act accordingly.

I suppose it deserves its place in the spectrum of arcane possibilities that receive some attention. But I would still like to see someone model this at the "multiverse level". Using the language of programs: if we consider some set of programs that *hasn't* been selected precisely so that they will engage in acausal coordination - perhaps the set of *all* well-formed programs in some very simple programming language - what are the prospects for the existence of nontrivial acausal trade networks? They may be very rare, they may be vastly outnumbered by programs which made a modeling error and are "trading" with nonexistent partners, and so on.

Comment by mitchell_porter on Weak arguments against the universal prior being malign · 2018-06-15T00:23:44.626Z · score: 9 (2 votes) · LW · GW

Has anyone ever actually presented an argument for such propositions? Like describing an ensemble of toy possible worlds in which even attempting "acausal trade" is rational, let alone one in which these acausal coalitions of acausal traders exist?

It might make some sense to identify with all your subjective duplicates throughout the (hypothetical) multiverse, on the grounds that some fraction of them will engage in the same decision process, so that how you decide here is actually how a whole sub-ensemble of "you"s will decide.

But acausal trade, as I understand it, involves simulating a hypothetical other entity, who by hypothesis is simulating *you* in their possible world, so as to artificially create a situation in which two ensemble-identified entities can interact with each other.

I mean... Do you, in this world, have to simulate not just the other entity, but also simulate its simulation of you?? So that there is now a simulation of you in *this* world? Or is that a detail you can leave out? Or do you, the original you, roleplay the simulation? Someone show me a version of this that actually makes sense.

Comment by mitchell_porter on Today a Tragedy · 2018-06-13T11:34:51.606Z · score: 26 (8 votes) · LW · GW

A long time ago, something similar happened to me. I was an immortalist, I was living the struggle, I got the call that someone had died. I remember thinking, OK, now I have to bring back the dead as well.

Good luck preserving your intentions and your functionality. Hopefully there are people around you in real life who at least half understand and half sympathize with your response to the situation.

Actions can have unexpected consequences. When you started your 80-day sprint, I started my own. Good luck between now and the end of July.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-10T10:29:45.863Z · score: 1 (1 votes) · LW · GW

The fact that I was basically serious, and in no way attempting to discourage @elriggs, and yet the comment is (after 12 hours) at -17, suggests that LW now has a problem with people who *do* want to do advanced things.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-09T23:26:13.298Z · score: 2 (3 votes) · LW · GW

I do mean it rather seriously. There are theoretical frameworks already in play, directed at solving the two major subproblems that you identify, i.e. creating raw superintelligence, and (let's say) identifying a friendly value system. I actually find it conceivable that the required breakthroughs are not far away, in the same way that e.g. imminent solutions to math's Millennium Problems are conceivable - someone just needs to have the right insights.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-09T22:34:56.057Z · score: -10 (10 votes) · LW · GW

An exercise for those who are a little more advanced: *actually* save the world within the next 80 days. In the context of AI safety at the singularity level, that would mean completely figuring out the theory required to have a friendly singularity, and then making it happen for real, by the end of July.

Comment by mitchell_porter on Weird question: could we see distant aliens? · 2018-04-21T03:07:16.970Z · score: 6 (2 votes) · LW · GW

I'm thinking large numbers of synchronized reusable beacons - either recurrent novas or black holes - where a flash is produced by feeding the beacon with gas in a controlled way. For rapid reuse, you want local byproducts of the flash to get out of the way quickly, so the next batch of gas can be introduced. That could mean dwarf novas, or black hole processes in which the waste comes out in tightly focused jets.

There is a "remarkable recurrent nova" in the Andromeda Galaxy, which repeats on a timescale of months.

Comment by mitchell_porter on Leaving beta: Voting on moving to · 2018-03-12T22:42:56.143Z · score: 2 (2 votes) · LW · GW

I only have an account on LW 1.0, not LW 2.0. Will my current account still exist after the migration?

Comment by mitchell_porter on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T21:09:59.204Z · score: 0 (0 votes) · LW · GW

Four hours of self-play and it's the strongest in the world. Soon the machines will be parenting us.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-06T12:54:48.094Z · score: 0 (0 votes) · LW · GW

"Knowledge of the terrain might be hard to get reliably"

Knowing that the world is made of atoms should take an AI a long way.

"If these people that develop [AGI] are friendly they might decide to distribute it to other people to make it harder for any one project to take off."

I hold to the classic definition of friendly AI as being AI with friendly values, which retains them (or even improves them) as it surpasses human intelligence and otherwise self-modifies. As far as I'm concerned, AlphaGo Zero demonstrates that raw problem-solving ability has crossed a dangerous threshold. We need to know what sort of "values" and "laws" should govern the choices of intelligent agents with such power.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-01T22:28:48.446Z · score: 0 (0 votes) · LW · GW

And you're the tax collector? Answer the question.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-01T21:19:15.661Z · score: 0 (0 votes) · LW · GW

Just answer the question.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-31T03:50:11.073Z · score: 0 (0 votes) · LW · GW

Wake up! In three days, that AI evolved from knowing nothing to comprehensively beating an earlier AI which had been trained on a distillation of the best human experience. Do you think there's a force in the world that can stand against that kind of strategic intelligence?

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-29T20:28:11.776Z · score: 0 (0 votes) · LW · GW

How much are you willing to lose?

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-19T21:31:45.389Z · score: 3 (3 votes) · LW · GW

A voice tells me that we're out of time. The future of the world will now be decided at Deep Mind, or by some other group at their level.

Comment by mitchell_porter on [Slashdot] We're Not Living in a Computer Simulation, New Research Shows · 2017-10-03T12:45:51.004Z · score: 0 (0 votes) · LW · GW

EurekAlert mentions the simulation argument, and the page implies that this was a press release from Oxford - even providing a media contact - though I have not found the document on Oxford's own website.

I am also skeptical of what the paper (arxiv) is actually saying, on a technical level. It reminds me of another paper a few months ago, which was hyped as exhibiting a "gravitational anomaly" in a condensed-matter system. From all that I could make out, there was no actual gravitational effect involved, only a formal analogy.

This paper seems to engage in exactly the same equivocation, now with the objective of proving something about computational complexity. But I'll have to study it in more detail to be sure.

Comment by mitchell_porter on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-09-07T00:08:33.991Z · score: 0 (0 votes) · LW · GW

It sounds like you want some second opinions and rational evaluation regarding your political conclusion - necessity of revolt. OK.

I can think of reasons for and reasons against such a conclusion, but probably you should spell out more of your reasoning first. For example, why will revolt help humanity survive?

Comment by mitchell_porter on Open thread, June. 12 - June. 18, 2017 · 2017-06-18T11:33:53.788Z · score: 2 (2 votes) · LW · GW

I'm going to take a wild guess, and suggest that your attitude towards FAI research, and your experience of CFS, are actually related. I have no idea if this is a standard theory, but in some ways CFS sounds like depression minus the emotion - and that is a characteristic symptom in people who have a purpose they regard as supremely important, who find absolutely no support for their attempt to pursue it, but who continue to regard it as supremely important.

The point being that when something is that important, it's easy to devalue certain aspects of your own difficulties. Yes, running into a blank wall of collective incomprehension and indifference may have been personally shattering; you may be in agony over the way that what you have to do in order to stay alive, interferes with your ability to preserve even the most basic insights that motivate your position ... but it's an indulgence to think about these feelings, because there is an invisible crisis happening that's much more important.

So you just keep grinding away, or you keep crawling through the desert of your life, or you give up completely and are left only with a philosophical perspective that you can talk about but can't act on... I don't know all the permutations. And then at some point it affects your health. I don't want to say that this is solely about emotion, we are chemical beings affected by genetics, nutrition, and pathogens too. But the planes intersect, e.g. through autoimmune disorders or weakened disease resistance.

The core psychological and practical problem is, there's a difficult task - the great purpose, whatever it is - being made more difficult in ways that have no intrinsic connection to the problem, but are solely about lack of support, or even outright interference. And then on top of that, you may also have doubts and meta doubts to deal with - coming from others and from yourself (and some of those doubts may be justified!). Finally, health problems round out the picture.

The one positive in this situation is that while all those negatives can reinforce each other, positive developments in one area can also carry across to another.

OK, so that's my attempt to reflect back to you how you sound to me. As for practical matters, I have only one suggestion. You say

he travels to the USA for a few days every couple of months

so I suggest that you at least wait until his next visit, and use that extra time to understand better how all these aspects of your life intersect.

Comment by mitchell_porter on Open thread, June 5 - June 11, 2017 · 2017-06-08T19:10:19.699Z · score: 0 (0 votes) · LW · GW

If ASI-provided immortal life were possible, you would already be living it.

... because if you're somewhere in an infinite sequence, you're more likely to be in the middle than at the beginning.
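The self-sampling arithmetic behind this can be made explicit (a sketch of the reasoning, not anything stated in the comment itself): treat your current observer-moment as drawn uniformly at random from a life of $N$ moments. Then

```latex
P(\text{you are among the first } k \text{ moments}) \;=\; \frac{k}{N} \;\longrightarrow\; 0
\quad \text{as } N \to \infty .
```

So for any fixed "beginning" of length $k$, an immortal being would almost never find itself there; observing yourself near the start of your existence is evidence against the lifespan being unboundedly long.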

Comment by mitchell_porter on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-25T09:29:48.951Z · score: 4 (7 votes) · LW · GW

Suppose there's some idea, X, which you think might help to solve a problem, Y. And there's also a dumb version of X, X', which you know doesn't work, but which still has enthusiasts.

And then one day there's a headline: CAN IDEA X SOLVE PROBLEM Y? Only you find out that it's actually X', the dumb version of X, that is being presented to the world as X... and nothing is done to convey the difference between X' and the version of X that actually warrants attention.

That is, more or less, the situation I find myself in, with respect to this article. I wish there were some snappier way to convey the situation, without talking about X and X' and so on, but I haven't found a way to do it.

Problem Y is: explain why quantum mechanics works, without saying that things don't have properties until they are measured, and so on.

Idea X is, these days, usually called Bohmian mechanics. To the Schrodinger equation, which describes the time evolution of the wavefunction of quantum mechanics, it adds a classical equation of motion for the particles, fields, etc. The particles, fields, etc., evolve on a trajectory in state space which follows the probability current in state space, as defined by the Schrodinger equation.
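The clause "follows the probability current" has a standard mathematical form; for reference, here is the Bohmian guiding equation in the usual textbook notation (an addition for illustration, not part of the original comment). For $N$ particles with joint wavefunction $\psi$,

```latex
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!\Big|_{(Q_1,\ldots,Q_N)} ,
```

where $Q_k$ is the actual position of the $k$-th particle and $\psi$ evolves under the Schrodinger equation. Note that the right-hand side is evaluated at the joint configuration $(Q_1,\ldots,Q_N)$: when $\psi$ is entangled, each particle's velocity depends on the positions of all the others, which is exactly the nonlocality that Bell-type correlations require.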

The original version of this idea is due to de Broglie, who proposed that particles are guided by waves. This was called pilot-wave theory, because the wave "pilots" the particle.

Pilot-wave theory was proposed in the very early days of quantum theory, before the significance of entanglement was properly appreciated. The significance of entanglement is that you don't have one wavefunction per particle, you just have one big wavefunction which provides probabilities for joint configurations of particles.

A pilot-wave theory for many particles, in the form that de Broglie originally proposed - one wave per particle - contains no entanglement, and can't reproduce the multi-particle predictions of quantum mechanics, as Bell's theorem and many other theorems show. Bohmian mechanics can reproduce those predictions, because in Bohmian mechanics, the wavefunction that does the piloting is the single, entangled, multi-particle wave used in actual quantum mechanics.

All this is utterly basic knowledge for the people who work on Bohmian mechanics today. But meanwhile, a group of people who work on fluid dynamics have apparently rediscovered de Broglie's original idea - "wave guiding a particle" - and are now promoting it as a possible explanation of quantum mechanics. They don't seem to care about the theorems proving that you can't get Bell-type correlations without using entangled waves.

So basically, this article describes the second-rate researchers in this field - in this case, people who are doing the equivalent of trying to force a square peg into a round hole - as if they are the intellectual leaders who define it!

Comment by mitchell_porter on Rationality Quotes February 2013 · 2017-04-02T12:34:23.227Z · score: 2 (3 votes) · LW · GW

Ironically, the man Yevtushenko is now dead too; but the world Yevtushenko, asteroid number 4234, lives on.

Comment by mitchell_porter on Open Thread, March. 6 - March 12, 2017 · 2017-03-30T10:10:09.428Z · score: 0 (0 votes) · LW · GW

What about pederasty in ancient Greece, or sex in all-male prisons? In both cases, you have men who by current definitions are not gay, but rather bisexual. And in both cases you have recruitment into an existing sexual culture, whether through seduction or coercion.

Human sexuality can clearly assume an enormous variety of forms, and I don't have a unified theory of it. Obviously genes matter, starting with the basic facts of sex determination, and then in a more elusive way, having some effect on sexual dispositions in the mature organism.

And yes, natural selection will be at work. But, in this case it is heavily mediated by culture (which is itself a realm of replicator populations), and it is constrained by the evolvability of the human genome. I continue to think that the existence of nonreproductive sexuality is simply a side effect of our genomic and evolutionary "business model", of leaky sexual dimorphism combined with Turing-complete cognition.

Comment by mitchell_porter on Open Thread, March. 6 - March 12, 2017 · 2017-03-23T19:31:20.217Z · score: 0 (0 votes) · LW · GW

There actually is a known replicator that assists the reproduction of gay phenotypes, but it's a behavior: gay sex! For a recent exposition, see the video that cost "Milo" his job.

Comment by mitchell_porter on Open Thread, March. 6 - March 12, 2017 · 2017-03-21T10:59:22.858Z · score: 0 (0 votes) · LW · GW

I believe in evolution, I just don't believe in the gay germ.

But regardless of belief... I have some questions which I think are fair questions.

Are there any ideas about how and when the gay germ is acquired?

Are there any ideas about its mechanism of action?

If homosexuality has such a huge fitness penalty, why haven't we evolved immunity to the gay germ?

If someone hasn't experienced sex yet, but they already think they are gay, is that because of the gay germ?

Comment by mitchell_porter on Win $1 Million For Your Bright Idea To Fix The World · 2017-03-19T00:12:14.053Z · score: 0 (0 votes) · LW · GW

Give it all to Fabian Tassano.

Comment by mitchell_porter on Open Thread, March. 6 - March 12, 2017 · 2017-03-18T07:27:19.328Z · score: 1 (1 votes) · LW · GW

OK, let's talk about proximate causes and ultimate causes. The proximate causes are whatever leads to the formation of a particular human individual's sexuality. The ultimate causes are whatever it is that brought about the existence of a population of organisms in which a given sexuality is even possible.

My focus has been on proximate causes. I look at the role of fantasy, choice, and culture in shaping what a person seeks and what they can obtain, and the powerful conditioning effect of emotional and sexual reward once obtained, and I see no need at all to posit an extra category of cause in order to explain the existence of homosexuality. It's just how some people end up satisfying their emotional and sexual drives.

What I had not grasped is that the idea of the gay germ is motivated by consideration of ultimate causes - all this talk of fitness penalties and failure to reproduce. I guess I thought Cochran was a science-jock who couldn't imagine being gay, and who therefore sought to explain it as resulting from an intruding perturbation of human nature.

I am frankly not sure how seriously I should take the argument that there has to be (in gwern's words) a "mechanism... to offset the huge fitness penalty". Humanity evolves through sexual selection, right? And there are lots of losers in every generation. Apparently that's part of our evolutionary "business model". Meanwhile I've argued that non-reproducing homosexuality is a human variation that arises quite easily, given our overall cognitive ensemble. So maybe natural selection has neither a clear incentive to eliminate it, nor a clear target to aim at anyway.

Comment by mitchell_porter on Open Thread, March. 6 - March 12, 2017 · 2017-03-13T06:48:33.283Z · score: 0 (0 votes) · LW · GW

My guess is that it's somehow a spandrel of intelligence.