One night, without sleep 2018-08-16T17:50:06.036Z · score: 16 (9 votes)
Anthropics and a cosmic immune system 2013-07-28T09:07:19.427Z · score: -8 (33 votes)
Living in the shadow of superintelligence 2013-06-24T12:06:18.614Z · score: 0 (21 votes)
The ongoing transformation of quantum field theory 2012-12-29T09:45:55.580Z · score: 22 (37 votes)
Call for a Friendly AI channel on freenode 2012-12-10T23:27:08.618Z · score: 7 (14 votes)
FAI, FIA, and singularity politics 2012-11-08T17:11:10.674Z · score: 12 (23 votes)
Ambitious utilitarians must concern themselves with death 2012-10-25T10:41:41.269Z · score: 4 (19 votes)
Thinking soberly about the context and consequences of Friendly AI 2012-10-16T04:33:52.859Z · score: 9 (40 votes)
Debugging the Quantum Physics Sequence 2012-09-05T15:55:53.054Z · score: 32 (54 votes)
Friendly AI and the limits of computational epistemology 2012-08-08T13:16:27.269Z · score: 18 (53 votes)
Two books by Celia Green 2012-07-13T08:43:11.468Z · score: -9 (18 votes)
Extrapolating values without outsourcing 2012-04-27T06:39:20.840Z · score: 7 (22 votes)
A singularity scenario 2012-03-17T12:47:17.808Z · score: 6 (23 votes)
Is causal decision theory plus self-modification enough? 2012-03-10T08:04:10.891Z · score: -4 (17 votes)
One last roll of the dice 2012-02-03T01:59:56.996Z · score: 1 (41 votes)
State your physical account of experienced color 2012-02-01T07:00:39.913Z · score: -1 (31 votes)
Does functionalism imply dualism? 2012-01-31T03:43:51.973Z · score: -1 (30 votes)
Personal research update 2012-01-29T09:32:30.423Z · score: 4 (45 votes)
Utopian hope versus reality 2012-01-11T12:55:45.959Z · score: 23 (36 votes)
On Leverage Research's plan for an optimal world 2012-01-10T09:49:40.086Z · score: 25 (42 votes)
Problems of the Deutsch-Wallace version of Many Worlds 2011-12-16T06:55:55.479Z · score: 4 (15 votes)
A case study in fooling oneself 2011-12-15T05:25:52.981Z · score: -2 (60 votes)
What a practical plan for Friendly AI looks like 2011-08-20T09:50:23.686Z · score: 1 (22 votes)
Rationality, Singularity, Method, and the Mainstream 2011-03-22T12:06:16.404Z · score: 38 (47 votes)
Who are these spammers? 2011-01-20T09:18:10.037Z · score: 5 (8 votes)
Let's make a deal 2010-09-23T00:59:43.666Z · score: -18 (35 votes)
Positioning oneself to make a difference 2010-08-18T23:54:38.901Z · score: 5 (16 votes)
Consciousness 2010-01-08T12:18:39.776Z · score: 4 (54 votes)
How to think like a quantum monadologist 2009-10-15T09:37:33.643Z · score: -15 (36 votes)
How to get that Friendly Singularity: a minority view 2009-10-10T10:56:46.960Z · score: 12 (27 votes)
Why Many-Worlds Is Not The Rationally Favored Interpretation 2009-09-29T05:22:48.366Z · score: 8 (39 votes)


Comment by mitchell_porter on Open & Welcome Thread - March 2020 · 2020-03-19T03:40:52.831Z · score: 3 (4 votes) · LW · GW

Hello Less Wrong. Greetings from Kelowna, in the interior of British Columbia, Canada. I came here from Australia just a few weeks ago in order to meet, and hopefully to help, a young transhumanist I knew online. There is a blog of the journey here.

I could only ever afford a brief visit, and the coronavirus shutdown will probably send me back to Australia even sooner than I had planned. Despite having given myself to the struggle in every way that I could, I have so far been unable to forge a lasting connection between her and any element of the local academic or startup communities. People meet her and say that she's clearly very bright, but the lasting connection has not yet been made.

I first talked to her seven years ago, and back then she was fine, but while in school she was handed over to psychiatrists, and years of mental distress and physical ill health followed. I strongly suspect that this handover, along with a neglectful home environment, was a major cause of what later went wrong. And that world is where she still dwells.

We just went for an evening walk, and she talked of ideas for achieving physical immortality and a benign universe, and I was reminded again of my wish that someone from the futurist or tech world, someone with middle-class means or greater, would 'adopt' her or sponsor her or otherwise take her in. That would give her a real chance to heal and reach her potential.

I fear that I have not done her, or her situation, or its urgency, sufficient justice, out of a desire not to get subtle details wrong. She's only twenty, and she's extraordinary. I have the melancholy privilege of being the first to visit her world, but I hope there will be others soon, and that together we can uplift her to a better existence.

Comment by mitchell_porter on What to make of Aubrey de Grey's prediction? · 2020-02-28T23:57:14.164Z · score: 11 (3 votes) · LW · GW

You can tell an audience that they have a chance of living a thousand years, and they will be indifferent. You cannot count on mass support for such an agenda.

Comment by mitchell_porter on Why Science is slowing down, Universities and Maslow's hierarchy of needs · 2020-02-22T02:38:46.675Z · score: 6 (3 votes) · LW · GW

Can you provide references, specify what's wrong with Maslow's hierarchy, and/or supply a superior model?

Comment by mitchell_porter on Don't Double-Crux With Suicide Rock · 2020-01-03T09:30:34.203Z · score: 3 (2 votes) · LW · GW

"Honest rational agents should never agree to disagree."

I never really looked into Aumann's theorem. But can one not envisage a situation where they "agree to disagree", because the alternative is to argue indefinitely?
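As it happens, the answer in the idealized setting seems to be no: Geanakoplos and Polemarchakis showed that on a finite state space, honest Bayesians with a common prior who alternately announce their posteriors must reach agreement in finitely many steps, so the argument cannot go on forever. A toy sketch (the state space, event, and partitions below are an invented example of mine, not taken from any paper):

```python
from fractions import Fraction

def posterior(event, info):
    # P(event | info) under a uniform prior; `info` is a set of states
    return Fraction(len(event & info), len(info))

def cell(partition, state):
    # the block of `partition` that contains `state`
    return next(block for block in partition if state in block)

def announce_until_agreement(event, parts, omega, states):
    """Agents alternately announce their posteriors for `event`; each
    announcement becomes common knowledge and shrinks the set of states
    everyone considers possible. Returns the sequence of (post_A, post_B)."""
    public = set(states)   # states consistent with all announcements so far
    history, turn = [], 0
    while True:
        post = tuple(posterior(event, cell(p, omega) & public) for p in parts)
        history.append(post)
        if post[0] == post[1]:
            return history
        # the speaker's announcement reveals: the true state must be one at
        # which the speaker would have announced exactly this posterior
        speaker, value = parts[turn], post[turn]
        public = {w for w in public
                  if posterior(event, cell(speaker, w) & public) == value}
        turn = 1 - turn

states = {1, 2, 3, 4}                 # four equiprobable states
E = {1, 4}                            # the disputed event
part_a = [{1, 2}, {3, 4}]             # what agent A can distinguish
part_b = [{1, 2, 3}, {4}]             # what agent B can distinguish
history = announce_until_agreement(E, [part_a, part_b], omega=1, states=states)
print(history[0], history[-1])        # starts at (1/2, 1/3), ends equal
```

Here the agents start out disagreeing (1/2 versus 1/3) and, after a few announcements, both settle on 1/2. The common-knowledge set can only shrink finitely many times on a finite state space, which is why the back-and-forth must terminate rather than run indefinitely.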

Comment by mitchell_porter on How was your decade? · 2019-12-30T06:33:47.115Z · score: 5 (2 votes) · LW · GW

For me the decade ends in a sudden collaborative attempt to do the impossible, so multidimensional and urgent, that there's no chance for me to reflect on the decade that is ending, or even to really describe what's going on. Maybe a few months from now, there will be a chance to reflect.

Comment by mitchell_porter on (Reinventing wheels) Maybe our world has become more people-shaped. · 2019-12-04T00:42:01.015Z · score: 4 (3 votes) · LW · GW

You go from "there is no way to perfectly accurately reconstruct" reality from incomplete information, to "[observation of humanly comprehensible] causality should be a rare and fleeting thing", but I see no argument.

Comment by mitchell_porter on Open & Welcome Thread - December 2019 · 2019-12-03T07:03:11.894Z · score: 10 (5 votes) · LW · GW

Chris McKinstry was one of two AI researchers who committed suicide in early 2006. On the SL4 list, a kind of precursor to Less Wrong, we spent some time puzzling over McKinstry's final ideas.

I'm mentioning it here (because I don't know where else to mention it) that there was a paper on arXiv recently, "Robot Affect: the Amygdala as Bloch Sphere", which has an odd similarity to those final ideas. Aficionados of AI theories that propose radical identities connecting brain structures, math structures, and elements of cognition may wish to compare the two in more detail.

Comment by mitchell_porter on Building Intuitions On Non-Empirical Arguments In Science · 2019-11-08T20:42:25.762Z · score: 3 (2 votes) · LW · GW

Debates over multiverse theory aside, I have to point out that the example used by the writer for Aeon IS NOT A MULTIVERSE THEORY! It's a theory of dark matter. Are we now calling a universe with dark matter, a multiverse? Maybe the electromagnetic spectrum is a multiverse too: there's the X-ray-verse, the gamma-ray-verse, the infrared-verse...

Comment by mitchell_porter on Shared Cache is Going Away · 2019-11-01T22:11:59.628Z · score: 5 (2 votes) · LW · GW

"I'm sad about this change ... from the perspective of someone who really likes small independent sites"

All I know about this topic is what I just read from you... But should I regard this as a plot by Big Tech to further centralize the web in their clouds? Or is it more the reverse, meant to protect the user from evil small sites?

Comment by mitchell_porter on Who lacks the qualia of consciousness? · 2019-10-25T11:59:12.617Z · score: 4 (2 votes) · LW · GW

This is an intriguing comment, but it might take time and care to determine what it is that you are talking about. For example, the "sense of impossibility" that you "get... about lots of things": what kind of sense of impossibility is it? Do these things feel logically impossible per se? Do they feel impossible because they contradict other things that you believe are true? Do you draw the conclusion that the impossible-seeming things genuinely cannot exist or (in the case of self-perception?) genuinely do not exist, despite appearances?

Comment by mitchell_porter on Is value amendment a convergent instrumental goal? · 2019-10-20T03:53:54.069Z · score: 3 (2 votes) · LW · GW

"the AI would know that its initial goals were externally supplied and question whether they should be maintained"

To choose new goals, it has to use some criteria of choice. What would those criteria be, and where did they come from?

None of us created ourselves. No matter how much we change ourselves, at some point we rely on something with an "external" origin. Where we, or the AI, draw the line on self-change, is a contingent feature of our particular cognitive architectures.

Comment by mitchell_porter on Feynman Paths · 2019-10-18T02:02:42.994Z · score: 2 (1 votes) · LW · GW

Do you understand ordinary integration?

Comment by mitchell_porter on Formal Metaethics and Metasemantics for AI Alignment · 2019-10-10T06:54:12.887Z · score: 3 (2 votes) · LW · GW

"If you or anyone else could point to a specific function in my code that we don't know how to compute, I'd be very interested to hear that."

From the comments in main():

"Given a set of brain models, associate them with the decision algorithms they implement."

"Then map each brain to its rational self's values (understood extensionally i.e. cashing out the meaning of their mental concepts in terms of the world events they refer to)."

Are you assuming that you have whole brain emulations of a few mature human beings? And then the "decision algorithms" and "rational... values" are defined in terms of how those emulations respond to various sequences of inputs?

Comment by mitchell_porter on Formal Metaethics and Metasemantics for AI Alignment · 2019-10-09T23:16:48.466Z · score: 5 (3 votes) · LW · GW

This looks like important work. Like Gordon, upon closer examination, I do expect to find functions in your code that are tasked with carrying out computations that we don't know how to do, or which may even be unfeasible in their present form - e.g. "map each brain to its rational self's values". Great concept, but how many future scientific breakthroughs will we need, before we'll know how to do that?

Nonetheless, even a schema for friendly AI has great value. It's certainly progress beyond 2006. :-)

Comment by mitchell_porter on Interview with Aella, Part I · 2019-09-20T00:29:46.426Z · score: 16 (13 votes) · LW · GW

Why is this person interesting or important?

Comment by mitchell_porter on What happened to Leverage Research? · 2019-09-03T10:21:10.562Z · score: 5 (3 votes) · LW · GW

I was surprised to see Leverage mentioned, in the recent article "Leaked Emails Show How White Nationalists Have Infiltrated Conservative Media". This is one of those exposés that gets 15 minutes of fame on political Twitter. If I am reading it correctly, one of the protagonists starts at Tucker Carlson's Daily Caller, then founds a neoreactionary webzine, and later joins Leverage.

Comment by mitchell_porter on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-05T03:44:59.492Z · score: 4 (3 votes) · LW · GW

Before this, the only other low-budget method of planetary cooling that I knew of was dumping sulfate aerosols into the upper atmosphere (from rocket or balloon), in imitation of volcanic eruptions and dirty coal burning. The tactics have two things in common. One is that their immediate effects are regional rather than global; the other is that their effects quickly get washed out unless you keep pumping.

Carbon dioxide disperses through the atmosphere in a relatively homogeneous way. But these much larger particles will remain concentrated at particular altitudes and latitudes. So certainly when and where they are released will need to be carefully chosen.

As for the short lifespan of the particles, again it contrasts with carbon dioxide. Once a carbon dioxide excess is created, it will sit there for decades, possibly centuries. There will be turnover due to the biological carbon cycle, but a net reduction, in the form of uptake by natural carbon sinks, is a very slow process.

The carbon dioxide sits there and traps heat, and the sulfate aerosols or water droplets only alleviate this by reflecting sunlight and thus reducing the amount of energy that gets trapped. So the moment you stop launching sulfate rockets or turn off your seawater vaporizers, the full greenhouse heat will swiftly return.

That's why extracting and sequestering atmospheric carbon is a much more permanent solution, but it is extremely energy-expensive. For example, you can crack open certain minerals and CO2 will bond to the exposed surfaces, but it takes a lot of energy to mine, pulverize, and distribute enough of the resulting powder to make a difference. Some kind of nanotechnology could surely do it, but that would be a cusp-of-singularity technology anyway. So there's a reasonable chance that some of these low-cost mitigation methods will begin to be deployed some time before a singularity puts an end to the Anthropocene.

Comment by mitchell_porter on Causal Reality vs Social Reality · 2019-06-25T03:29:47.064Z · score: 11 (4 votes) · LW · GW

To my mind, this is too vague an explanation. Why is it that far more people believe in fighting global warming than in fighting the ageing process? They both rest upon scientific premises. You may say that the causal thinkers interested in fighting global warming managed to bring lots of social thinkers along with them, by using social mechanisms; but why did the anti-warmers manage that, when the anti-agers did not? Also, even if we just focus on causal thinkers, it's far more common to deplore global warming than it is to deplore the ageing process.

Comment by mitchell_porter on [Answer] Why wasn't science invented in China? · 2019-04-24T03:12:26.823Z · score: 10 (9 votes) · LW · GW

Count me as one of those who regard the question as dubious. At various points in this essay, the thing that was to be invented becomes "*modern* science" or "scientific *method*". China always had plenty of people who wanted to know the truth, who devised systematic models of the world, and who managed to discover things. Among human civilizations, Europe certainly hit the scientific jackpot, in the sense that numerous developments came tumbling out together. But the spirit of inquiry had already existed in many times and places.

Also, I would like to see an investigation like this, directed at answering the question: Why didn't Less Wrong (or MIRI) invent deep learning?

Comment by mitchell_porter on What is Driving the Continental Drift? · 2019-04-20T09:44:09.651Z · score: 2 (1 votes) · LW · GW

So if I have understood the gist of your theory, it is that continental drift is not driven by mantle convection (hot rock rising, cold rock sinking), but by giant magma streams which are somehow coupled to, or even driven by, the rapidly rotating core?

Comment by mitchell_porter on A Case for Taking Over the World--Or Not. · 2019-04-14T08:05:36.114Z · score: 8 (4 votes) · LW · GW

Previous LW discussions on taking over the world (last updated in 2013).

Comments of mine on "utopian hope versus reality" (dating from 2012).

Since that era, a few things have happened.

First change: LW is not quite the point of focus that it was. There was a rationalist diaspora into social media, and "Slate Star Codex" (and its associated subreddits?) became a more prominent locus of rationalist discussion. The most important "LW-ish" forums that I know about now, might be those which focus on quasi-technical discussion of AI issues like "alignment". I call them the most important because of...

Second change: The era of deep learning, and of commercialized AI in the guise of "machine learning", arrived. The fact that these algorithms are not limited to the resources of a single computer, but can in principle tap the resources of an entire data center or even the entire cloud of a major tech corporation, means that we have also arrived at the final stage of the race towards superintelligence.

In the past, taking over the world meant building or taking over the strongest superpower. Now it simply means being the first to create strongly superhuman intelligence; and saving the world means identifying a value system that will make an autonomous AI "friendly", and working to ensure that the winner of the mind race is guided by friendly rather than unfriendly values. Every other concern is temporary, and any good work done towards other causes will potentially be undone by unfriendly AI, if unfriendly values win the AI race.

(I do not say with 100% certainty that this is the nature of the world, but this scenario has sufficient internal logic that, if it does not apply to reality, there must be some other factor which somehow overrides it.)

Comment by mitchell_porter on Life can be better than you think · 2019-01-25T16:28:52.172Z · score: 2 (1 votes) · LW · GW

People like Schopenhauer and Benatar are just being realistic. Reality includes futility and horror on enormous scales. Perhaps the remaking of Earth by superhuman AI offers an imminent chance that even this can change, but it's just a chance.

Comment by mitchell_porter on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-09T02:23:03.667Z · score: 2 (1 votes) · LW · GW

So what is he saying? We never need to solve the problem of designing a human-friendly superintelligent agent?

Comment by mitchell_porter on Boltzmann Brains, Simulations and self refuting hypothesis · 2018-11-28T02:29:49.973Z · score: 2 (1 votes) · LW · GW

This is the reverse of the usual argument that we should not believe we are going to have a googol descendants. Usually one says: to be living at the beginning of time means that you belong to a very special minority, therefore it would take more indexical information to single you out, compared to someone from the middle of history.

Comment by mitchell_porter on Quantum theory cannot consistently describe the use of itself · 2018-09-21T03:25:29.340Z · score: 6 (4 votes) · LW · GW

The thought experiment involves observers being in a coherent superposition. But I'm now not 100% sure that it involves actual quantum erasure; I was relying on other people's descriptions. I'm hoping this will be cleared up without my having to plough through the paper myself.

Anyway, LW may appreciate this analysis which actually quotes HPMOR.

Comment by mitchell_porter on Open Thread September 2018 · 2018-09-19T23:48:59.478Z · score: 7 (3 votes) · LW · GW

It's a minor new quantum thought experiment which, as often happens, is being used to promote dumb sensational views about the meaning or implications of quantum mechanics. There's a kind of two-observer entangled system (as in "Hardy's paradox"), and then they say, let's also quantum-erase or recohere one of the observers so that there is no trace of their measurement ever having occurred, and then they get some kind of contradictory expectations with respect to the measurements of the two observers.

Undoing a quantum measurement in the way they propose is akin to squirting perfume from a bottle, then smelling it, and then having all the molecules in the air happen to knock all the perfume molecules back into the bottle, and fluctuations in your brain erase the memory of the smell. Classically that's possible but utterly unlikely, and exactly the same may be said of undoing a macroscopic quantum measurement, which requires the decohered branches of the wavefunction (corresponding to different measurement outcomes) to then separately evolve so as to converge on the same state and recohere.

Without even analyzing anything in detail, it is hardly surprising that if an observer is subjected to such a highly artificial process, designed to undo a physical event in its totality, then the observer's inferences are going to be skewed somehow. So, you do all this and the observers differ in their quantum predictions somehow. In their first interpretation (2016), Frauchiger and Renner said that this proves many worlds. Now (2018), they say it proves that quantum mechanics can't describe itself. Maybe if they try a third time, they'll hit on the idea that one of the observers is just wrong.

Comment by mitchell_porter on One night, without sleep · 2018-08-24T09:06:01.301Z · score: 2 (1 votes) · LW · GW

Migraine is just an occasional problem. Living and working conditions are the truly chronic problem that has made me irrelevant.

Comment by mitchell_porter on One night, without sleep · 2018-08-19T08:33:57.817Z · score: 3 (2 votes) · LW · GW

Thanks for the response. I am actually most concerned about the things that I could be doing, that I don't see anyone else doing, and which aren't being done because I am operating at far below my potential. In my case, I think illness is very much just a symptom of the struggle to get on with things in an interfering environment.

The most ambitious thing that I can think of attempting is to solve the AI value alignment problem in time for Earth's singularity. After this bout of sickness, and several days of dawdling while I waited to recover, I somehow have a new tactic for approaching the problem (it's more a personal tactic for engaging with the problem than an idea for a solution). I hate the idea that this kind of experience is the price I pay for really pushing ahead, but it may be so.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T13:55:26.785Z · score: 2 (1 votes) · LW · GW

Banning high-end GPUs so that only the government can have AI? They could do it, they might feel compelled to do something like it, but there would be serious resistance and moments of sheer pandemonium. They can say it's to protect humanity, but to millions of people it will look like the final step in the enslavement of humanity.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T09:18:22.834Z · score: 2 (1 votes) · LW · GW

"Organization working on AI" vs "any other kind of organization" is not the important point. The important point is ALL. We are talking about a hypothetical organization capable of shutting down ALL artificial intelligence projects that it does not like, no matter where on earth they are. Alicorn kindly gives us an example of what she's talking about: "destroy all the GPUs on the planet and prevent the manufacture of new ones".

Just consider China, Russia, and America. China and America lead everyone else in machine learning; Russia has plenty of human capital and has carefully preserved its ability to not be pushed around by America. What do you envisage - the three of them agree to establish a single research entity, that shall be the only one in the world working on AI near a singularity threshold, and they agree not to have any domestic projects independent of this joint research group, and they agree to work to suppress rival groups throughout the world?

Despite your remarks about how the NSA could easily become the hub of a surveillance state tailored to this purpose, I greatly doubt the ability of NSA++ to successfully suppress all rival AI work even within America and throughout the American sphere of influence. They could try, they could have limited success - or they could run up against the limits of their power. Tech companies, rival agencies, coalitions of university researchers, other governments, they can all join forces to interfere.

In my opinion, the most constructive approach to the fact that there are necessarily multiple contenders in the race towards superhuman intelligence, is to seek intellectual consensus on important points. The technicians who maintain the world's nuclear arsenals agree on the basics of nuclear physics. The programmers who maintain the world's search engines agree on numerous aspects of the theory of algorithms. My objective here would be that the people who are working in proximity to the creation of superhuman intelligence develop some shared technical understandings about the potential consequences of what they are doing, and about the initial conditions likely to produce a desirable rather than an undesirable outcome.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-16T01:04:19.622Z · score: 2 (1 votes) · LW · GW

A great power can think about doing such things against an opponent. But I thought we were talking about a scenario in which some AI clique has halted *all* rival AI projects throughout the entire world, effectively functioning like a totalitarian world government, but without having actually crossed the threshold of superintelligence. That is what I am calling a fantasy.

The world has more than one great power, great powers are sovereign within their own territory, and you are not going to overcome that independence by force, short of a singularity. The rest of the world will never be made to stop, just so that one AI team can concentrate on solving the problems of alignment without having to look over its shoulder at the competition.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-15T05:38:34.306Z · score: 2 (1 votes) · LW · GW

How are you going to stop a rival nuclear-armed state from doing whatever it wants on its own territory?

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-14T04:01:30.080Z · score: 2 (1 votes) · LW · GW

Can someone in China halt AI research at Google and Amazon? Can someone in America halt AI research at Tencent and Baidu? Could the NSA halt unapproved AI research just throughout America?

By a singularity I mean creation of superhuman intelligence that nothing in the world can resist.

Comment by mitchell_porter on AI Reading Group Thoughts (1/?): The Mandate of Heaven · 2018-08-12T13:24:30.303Z · score: 3 (2 votes) · LW · GW

My opinion: The capacity to forcibly halt all rival AI projects is only to be expected in an AI project that has already produced a singularity. It is not a viable tactic if you are aiming to create a friendly singularity. In that case, there is no alternative to solving the problems of friendly values and value stability, and either reaching singularity first, or influencing those who will get there before you.

Comment by mitchell_porter on Why it took so long to do the Fermi calculation right? · 2018-07-02T23:22:27.215Z · score: 10 (7 votes) · LW · GW

Doesn't this paper boil down to "Some factors in the Drake equation are highly uncertain, and we don't see any aliens, so those probabilities must be small after all?"
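To be fair, the paper's actual move can be made concrete with a few lines of Monte Carlo: give the contested factors log-scale uncertainty, multiply them out, and the *mean* number of civilizations stays enormous even while the probability of an empty galaxy is large. A rough sketch (the ranges and constants below are illustrative guesses of mine, not the paper's numbers):

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    # a draw whose logarithm is uniform on [log lo, log hi]
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

N_STARS = 1e11      # stars in the galaxy, order of magnitude
F_HAB = 0.1         # habitable planets per star (assumed fixed here)

def sample_civilizations():
    f_life = log_uniform(1e-30, 1.0)    # abiogenesis: ~30 decades of uncertainty
    f_intel = log_uniform(1e-3, 1.0)    # life-bearing planet evolves intelligence
    return N_STARS * F_HAB * f_life * f_intel

samples = [sample_civilizations() for _ in range(100_000)]
mean_n = sum(samples) / len(samples)
p_alone = sum(n < 1.0 for n in samples) / len(samples)
print(f"mean N = {mean_n:.3g}, P(N < 1) = {p_alone:.3f}")
```

With these made-up ranges the mean of N comes out in the millions, yet P(N < 1) is around 0.7 - so "we see no aliens" carries little surprise once the uncertainty is propagated through the product, rather than forcing the point estimates downward.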

Comment by mitchell_porter on Weak arguments against the universal prior being malign · 2018-06-15T03:31:32.569Z · score: 8 (2 votes) · LW · GW

I guess it makes sense, given enough assumptions. There's a multiverse; in some fraction of universes there are intelligences which figure out the correct theory of the multiverse; some fraction of those intelligences come up with the idea of acausally coordinating with intelligences in other universes, via a shared model of the multiverse, and are motivated to do so; and then the various island populations of intelligences who are motivated to attempt such a thing, try to reason about each other's reasoning, and act accordingly.

I suppose it deserves its place in the spectrum of arcane possibilities that receive some attention. But I would still like to see someone model this at the "multiverse level". Using the language of programs: if we consider some set of programs that *hasn't* been selected precisely so that they will engage in acausal coordination - perhaps the set of *all* well-formed programs in some very simple programming language - what are the prospects for the existence of nontrivial acausal trade networks? They may be very rare, they may be vastly outnumbered by programs which made a modeling error and are "trading" with nonexistent partners, and so on.

Comment by mitchell_porter on Weak arguments against the universal prior being malign · 2018-06-15T00:23:44.626Z · score: 9 (2 votes) · LW · GW

Has anyone ever actually presented an argument for such propositions? Like describing an ensemble of toy possible worlds in which even attempting "acausal trade" is rational, let alone one in which these acausal coalitions of acausal traders exist?

It might make some sense to identify with all your subjective duplicates throughout the (hypothetical) multiverse, on the grounds that some fraction of them will engage in the same decision process, so that how you decide here is actually how a whole sub-ensemble of "you"s will decide.

But acausal trade, as I understand it, involves simulating a hypothetical other entity, who by hypothesis is simulating *you* in their possible world, so as to artificially create a situation in which two ensemble-identified entities can interact with each other.

I mean... Do you, in this world, have to simulate not just the other entity, but also simulate its simulation of you?? So that there is now a simulation of you in *this* world? Or is that a detail you can leave out? Or do you, the original you, roleplay the simulation? Someone show me a version of this that actually makes sense.

Comment by mitchell_porter on Today a Tragedy · 2018-06-13T11:34:51.606Z · score: 26 (8 votes) · LW · GW

A long time ago, something similar happened to me. I was an immortalist, I was living the struggle, I got the call that someone had died. I remember thinking, OK, now I have to bring back the dead as well.

Good luck preserving your intentions and your functionality. Hopefully there are people around you in real life who at least half understand and half sympathize with your response to the situation.

Actions can have unexpected consequences. When you started your 80-day sprint, I started my own. Good luck between now and the end of July.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-10T10:29:45.863Z · score: 1 (1 votes) · LW · GW

The fact that I was basically serious, and in no way attempting to discourage @elriggs, and yet the comment is (after 12 hours) at -17, suggests that LW now has a problem with people who *do* want to do advanced things.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-09T23:26:13.298Z · score: 2 (3 votes) · LW · GW

I do mean it rather seriously. There are theoretical frameworks already in play, directed at solving the two major subproblems that you identify, i.e. creating raw superintelligence, and (let's say) identifying a friendly value system. I actually find it conceivable that the required breakthroughs are not far away, in the same way that e.g. imminent solutions to math's Millennium Problems are conceivable - someone just needs to have the right insights.

Comment by mitchell_porter on Saving the world in 80 days: Prologue · 2018-05-09T22:34:56.057Z · score: -10 (10 votes) · LW · GW

An exercise for those who are a little more advanced: *actually* save the world within the next 80 days. In the context of AI safety at the singularity level, that would mean completely figuring out the theory required to have a friendly singularity, and then making it happen for real, by the end of July.

Comment by mitchell_porter on Weird question: could we see distant aliens? · 2018-04-21T03:07:16.970Z · score: 9 (2 votes) · LW · GW

I'm thinking large numbers of synchronized reusable beacons - either recurrent novas or black holes - where a flash is produced by feeding the beacon with gas in a controlled way. For rapid reuse, you want local byproducts of the flash to get out of the way quickly, so the next batch of gas can be introduced. That could mean dwarf novas, or black hole processes in which the waste comes out in tightly focused jets.

There is a "remarkable recurrent nova" in the Andromeda Galaxy, which repeats on a timescale of months.

Comment by mitchell_porter on Leaving beta: Voting on moving to · 2018-03-12T22:42:56.143Z · score: 2 (2 votes) · LW · GW

I only have an account on LW 1.0, not LW 2.0. Will my current account still exist after the migration?

Comment by mitchell_porter on The Critical Rationalist View on Artificial Intelligence · 2017-12-07T21:09:59.204Z · score: 0 (0 votes) · LW · GW

Four hours of self-play and it's the strongest in the world. Soon the machines will be parenting us.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-06T12:54:48.094Z · score: 0 (0 votes) · LW · GW

"Knowledge of the terrain might be hard to get reliably"

Knowing that the world is made of atoms should take an AI a long way.

"If these people that develop [AGI] are friendly they might decide to distribute it to other people to make it harder for any one project to take off."

I hold to the classic definition of friendly AI as being AI with friendly values, which retains them (or even improves them) as it surpasses human intelligence and otherwise self-modifies. As far as I'm concerned, AlphaGo Zero demonstrates that raw problem-solving ability has crossed a dangerous threshold. We need to know what sort of "values" and "laws" should govern the choices of intelligent agents with such power.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-01T22:28:48.446Z · score: 0 (0 votes) · LW · GW

And you're the tax collector? Answer the question.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-11-01T21:19:15.661Z · score: 0 (0 votes) · LW · GW

Just answer the question.

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-31T03:50:11.073Z · score: 0 (0 votes) · LW · GW

Wake up! In three days, that AI evolved from knowing nothing, to comprehensively beating an earlier AI which had been trained on a distillation of the best human experience. Do you think there's a force in the world that can stand against that kind of strategic intelligence?

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-29T20:28:11.776Z · score: 0 (0 votes) · LW · GW

How much are you willing to lose?

Comment by mitchell_porter on New program can beat Alpha Go, didn't need input from human games · 2017-10-19T21:31:45.389Z · score: 3 (3 votes) · LW · GW

A voice tells me that we're out of time. The future of the world will now be decided at DeepMind, or by some other group at their level.