Posts

Alpha Go Zero comments 2017-10-19T01:27:40.306Z
Intelligence risk and distance to endgame 2012-04-13T09:16:05.486Z

Comments

Comment by Kyre on Ilya Sutskever created a new AGI startup · 2024-06-21T04:11:38.672Z · LW · GW

I’m worried about this cracked team.

Comment by Kyre on Prisoners' Dilemma with Costs to Modeling · 2018-06-10T09:04:21.055Z · LW · GW

Very nice. This is the cleanest result on cognitive (or rationality) costs in co-operative systems that I've seen. Modal combat seems kind of esoteric compared to, say, iterated prisoners' dilemma tournaments with memory, but it pays off nicely here. It gives you the outcomes of a set of other-modelling agents (without e.g. doing a whole lot of simulation), and the box-operator depth then plugs in as a natural modelling-cost measure.

Did you ever publish any of your modal combat code? (I have a vague recollection that you had some Haskell code.)
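
Here's a toy analogue in Python of the modelling-cost idea — a sketch only (mine, not the modal-logic formalism): agents receive the opponent's function plus a recursion budget standing in for box-operator depth. Note how naive bounded simulation makes FairBot-vs-FairBot bottom out in defection, which is exactly the failure the Löbian proof-based machinery avoids.

```python
# Toy analogue, not modal combat itself: the "budget" plays the role
# of box-operator depth, i.e. how deeply an agent models its opponent.
C, D = "C", "D"

def cooperate_bot(opponent, budget):
    return C

def defect_bot(opponent, budget):
    return D

def fair_bot(opponent, budget):
    # Cooperate iff bounded inspection says the opponent cooperates back.
    if budget == 0:
        return D  # modelling budget exhausted: defect to be safe
    return C if opponent(fair_bot, budget - 1) == C else D

for name, agent in [("CooperateBot", cooperate_bot),
                    ("DefectBot", defect_bot),
                    ("FairBot", fair_bot)]:
    print("FairBot vs", name, "->", fair_bot(agent, budget=3))
```

Mutual simulation bottoms out in defection here (FairBot vs FairBot prints D); the point of the proof-based approach is that two FairBots can cooperate without either having to out-simulate the other.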

Comment by Kyre on Thought experiment: coarse-grained VR utopia · 2017-06-15T05:07:38.964Z · LW · GW

Don't humans have to give up on doing their own science then (at least fundamental physics)?

I guess I can have the FAI make me a safe "real physics box" to play with inside the system; something that emulates what it finds out about real physics.

Comment by Kyre on Becoming a Better Community · 2017-06-08T07:41:26.489Z · LW · GW

If you failed you'd want to distinguish between (a) rationalism sucking, (b) your rationalism sucking, and (c) EVE already being full of rationalists.

Whether or not success in EVE is relevant outside EVE is debatable, but I think the complexity, politics and intense competition mean that it would be hard to find a better online proving ground.

Comment by Kyre on Inbox zero - A guide - v2 (Instrumental behaviour) · 2017-03-14T05:31:07.509Z · LW · GW

Good advice, but I would go further. Don't use your inbox as a to-do list at all. I maintain a separate to-do list for roughly three reasons.

(1) You can't have your inbox in both chronological and priority order at once. Keeping an inbox and email folders in chronological order is good for searching and keeping track of email conversations.

(2) Possibly just my own psychological quirk, but inbox emails feel like someone waiting for me and getting impatient. I can't seem to get away from my inbox fundamentally representing a communications channel with people on the other end. Watching me.

(3) When I "do email", I know I'm done when I have literally inbox zero, and I get the satisfaction of that several times a day.

I have found that I need scrupulous email and task accounting though. Every email gets deleted (and that advice on unsubscribing is good), or handled right away (within say 2 minutes), or generates a task on the to-do list, with the email filed in a subject folder for when the task comes to be dealt with.

Comment by Kyre on Should you share your goals · 2016-12-16T06:27:44.844Z · LW · GW

It's not just the environment in which you share your goals that matters, but also how you suspect you will react to the responses you get.

When reading through these two scenarios, I can just as easily imagine someone reacting in exactly the opposite way. That is, in the first case, thinking "gosh, I didn't know I had so many supportive friends", "I'd better not let them down", and generally getting a self-reinforcing high when making progress.

Conversely, say phase 1 had failed and got the responses stated above. I can imagine someone thinking "hey my friends are a bunch of jerks" and "they're right, I'm probably going to fail again", and then developing a flinch thinking about weight loss, and losing interest in trying.

Comment by Kyre on Measuring the Sanity Waterline · 2016-12-08T03:51:34.432Z · LW · GW

My five minutes' worth of thoughts.

Metrics that might be useful (on the grounds that in hindsight people would say that they made bad decisions): traffic accident rate, deaths due to smoking, bankruptcy rates, consumer debt levels.

Experiments you could do if you could randomly sample people and get enough of their attention: simple reasoning tests (e.g. confirmation bias), getting people to make some concrete predictions and following them up a year later.

Maybe something measuring people's level of surprise at real vs fake Facebook news (on the grounds that people should be more surprised at fake news)?

Comment by Kyre on My problems with Formal Friendly Artificial Intelligence work · 2016-12-07T04:45:34.660Z · LW · GW

Doing theoretical research that ignores practicalities sometimes turns out to be valuable in practice. It can open a door to something you assumed to be impossible, or save a lot of wasted effort on a plan that turns out to have an impossible sub-problem.

A concrete example of the first category might be something like quantum error correcting codes. Prior to that theoretical work, a lot of people thought that quantum computers were not worth pursuing because noise and decoherence would be an insurmountable problem. Quantum fault tolerance theorems did nothing to help solve the very tough practical problems of building a quantum computer, but they did show people that it might be worth pursuing - and here we are 20 years later closing in on practical quantum computers.

I think source code based decision theory might have something of this flavour. It doesn't address all those practical issues such as how one machine comes to trust that another machine's source code is what it says. That might indeed scupper the whole thing. But it does clarify where the theoretical boundaries of the problem are.

You might have thought "well, two machines could co-operate if they had identical source code, but that's too restrictive to be practical". But it turns out that you don't need identical source code if you have the other machine's source code and can prove things about it. Then you might have thought "ok, but those proofs will never work because of non-termination and self-reference" ... and it turns out that that is wrong too.

Theoretical work like this could inform you about what you could hope to achieve if you could solve the practical issues; and conversely what problems are going to come up that you are absolutely going to have to solve.

Comment by Kyre on Non-Fiction Book Reviews · 2016-08-12T04:51:24.545Z · LW · GW

Will second "Good and Real" as worth reading (haven't read any of the others).

Comment by Kyre on Earning money with/for work in AI safety · 2016-07-19T04:01:21.127Z · LW · GW

Maybe translating AI safety literature into Japanese would be a high-value use of your time?

Comment by Kyre on Rationality when Insulated from Evidence · 2016-06-30T05:06:42.492Z · LW · GW

That's true, 20 years wouldn't necessarily bring to light a delayed effect.

However the GMO case is interesting because we have in effect a massive scale natural experiment, where hundreds of millions of people on one continent have eaten lots of GMO food while hundreds of millions on another continent have eaten very little, over a period of 10-15 years. There is also a highly motivated group of people who bring to the public attention even the smallest evidence of harm from GMOs.

While I don't rule out a harmful long-term effect, GMOs are a long way down on my list of things to worry about, and dropping further over time.

Comment by Kyre on Does immortality imply eternal existence in linear time? · 2016-04-26T05:42:13.850Z · LW · GW

Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that König's Lemma applied :-)

If I were to try to make a more serious argument it would go something like this.

Defining identity - whether two entities are 'the same person' - is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states ('mind states' would probably have been better than 'observer moments') and outgoing edges leading to the mind states a few moments later.

That is kind of what I meant by "moment-by-moment" identity. By itself it is a local but not global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven't thought about whether it's a good one.

In the ordinary course of events these graphs aren't very interesting; they're just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, that would give you a vertex with out-degree greater than one.

So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting.

Also, you could have two mind states that lead to the same successor mind state - for example where two different mind states only differ on a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph structured identity.

(On a side-note, I think people generally treat splitting and merging of mind states in a way that is far too symmetrical. Splitting seems far easier - trivial once you can digitize a mind-state. Merging would be like a complex software version control problem, and you'd need to apply selective amnesia very carefully to achieve it.)

So, if we say "immortality" is having an identity graph with an infinite number of mind-states all connected through the "moment-by-moment identity" relation (stay with me here), and mind states only have a finite number of successor states, then there must be at least one infinite path, and therefore "eternal existence in linear time".

Rather contrived, I know.
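
Contrived or not, the argument is compact when written out. Here's a sketch (the reachability and finite-successor assumptions are the ones stated above):

```latex
\textbf{Claim (K\"onig).} If every mind-state has finitely many successors
and infinitely many mind-states are reachable from the current state $v_0$,
then the identity graph contains an infinite directed path
$v_0 \to v_1 \to v_2 \to \cdots$

\textbf{Sketch.} The infinitely many states reachable from $v_0$ are
distributed among its finitely many successors, so by pigeonhole some
successor $v_1$ still reaches infinitely many states; recursing on $v_1$
builds the path --- ``eternal existence in linear time''.
```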

Comment by Kyre on Does immortality imply eternal existence in linear time? · 2016-04-20T04:46:08.193Z · LW · GW

If we take "immortality" to mean "infinitely many distinct observer moments that are connected to me through moment-to-moment identity", then yes, by König's Lemma.

(Every infinite, connected graph whose vertices all have finite degree contains an infinite path.)

(edit: hmmm, does many-worlds give you infinite branching into distinct observer moments?)

Comment by Kyre on LINK: Videogame with a very detailed simulated universe · 2016-02-19T05:09:33.332Z · LW · GW

Procedural universes seemed to see a real resurgence from around 2014, with e.g. Elite Dangerous, No Man's Sky, and quite a few others that have popped up since.

I love a beautiful procedural world, but I think things will get more interesting when games appear with procedural plot structures that are cohesive and reactive.

Then multiplayer versions will appear that weave all player actions into the plot, and those games will suck people in and never let go.

Comment by Kyre on Consciousness and Sleep · 2016-01-12T07:47:48.745Z · LW · GW

For 5 minutes' suspension versus dreamless deep sleep - almost exactly the same person. For 3 hours' dreamless deep sleep I'm not so sure. I think my brain does something to change state while I'm deep asleep, even if I don't consciously experience or remember anything. Have you ever woken up feeling different about something, or with a solution to a problem you were thinking about as you dropped off? If that's not all due to dreaming, then you must be evolving at least slightly while completely unconscious.

Comment by Kyre on Your transhuman copy is of questionable value to your meat self. · 2016-01-07T17:09:29.048Z · LW · GW

Would a slow cell-by-cell, or thought-by-thought / byte-by-byte, transfer of my mind to another medium work - one at a time, every new neural action potential received by a parallel processing medium which takes over? I want to say the resulting transfer would be the same consciousness as is typing this, but then what if the same slow process were done to make a copy and not a transfer? Once a consciousness is virtual, is every transfer from one medium or location to another not essentially a copy, and therefore representing a death of the originating version?

I would follow this line of questioning. For example, say someone does an incremental copy process to you, but the consciousness generated does not know whether or not the original biological consciousness has been destroyed, and has to choose which one to keep. If it chooses the biological one and the biology has been destroyed, bad luck - you are definitely gone. What does your consciousness, running either just on the silicon, or identically on the silicon and in the biology, choose?

Let's say you are informed that there is a 1% chance that the biological version has been destroyed. Well, you're almost certainly fine then: you keep the biological version, the silicon version is destroyed, and you live happily ever after until you become senile and die.

On the other hand, say you are informed that the biological version has definitely been destroyed. On your current theory, this means that the consciousness realises that it has been mistaken about its identity, and is actually only a few minutes old. It's sad that the progenitor person is gone, but it is not suicidal, so it chooses the silicon version.

At what point on the 1% to 100% slider would your consciousness choose the silicon version?

(Hearing the thought experiment of incremental transfer (or alternatively duplication) was one of the things that changed my mind from some sort of continuity-identity theory to pattern-identity. I remember hearing an interview with Marvin Minsky where he described an incremental transfer on a radio program.)

Comment by Kyre on This year's biggest scientific achievements · 2015-12-14T05:06:16.363Z · LW · GW

Not sure if it's a scientific or engineering achievement, but this Nature letter stuck in my mind:

An aqueous, polymer-based redox-flow battery using non-corrosive, safe, and low-cost materials

Comment by Kyre on Crazy Ideas Thread, December 2015 · 2015-12-07T05:36:51.410Z · LW · GW

Oh, I think I see what you mean. No matter how many or how detailed the simulations you run, if your purpose is to learn something from watching them, then ultimately you are limited by your own ability to observe and process what you see.

Whoever is simulating you only has to run the simulations that you launch to the level of fidelity such that you can't tell if they've taken shortcuts. The deeper the nested simulation people are, the harder it is for you to pay attention to them all, and the coarser their simulations can be.

If you are running simulations to answer psychological questions, that should work. And if you are running simulations to answer physics questions ... why would you fill them with conscious people?

Of course, if I were specifically trying to crash the simulation that I was in, I might come up with some physical laws that would eat up a lot of processing power to calculate for even one person's local space, but between the limitations of computing as they exist in the base simulation, the difficulty in confirming that these laws have been properly executed in all of their fully-complex glory

I was going to say that if you want to be a pain you could launch some NP-hard problems whose solutions you can manually verify with pencil and paper ... except your simulators control your random-number generators.

Comment by Kyre on Crazy Ideas Thread, December 2015 · 2015-12-05T23:49:45.063Z · LW · GW

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

You're right - branch (2) should be "we don't keep running more than one". We can launch as many as we like.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).

That would buy you some time. If a single-agent simulation is say 10^60 times cheaper than a whole universe (roughly the number of elementary particles in the observable universe?), then that gives you about 200 doubling generations before those single-agent simulations cost as much as a universe.
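
Quick check of that arithmetic:

```latex
\log_2\!\left(10^{60}\right) \;=\; 60 \log_2 10 \;\approx\; 60 \times 3.32 \;\approx\; 199 \ \text{doublings}
```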

Unless the space of all practically different possible lives of the agent is actually much smaller ... maybe your choices don't matter that much and you end up playing out a relatively small number of attractor scripts. You might be able to map out that space efficiently with some clever dynamic programming.

Comment by Kyre on Crazy Ideas Thread, December 2015 · 2015-12-05T23:13:45.199Z · LW · GW

That's the unbounded computation case.

Comment by Kyre on Crazy Ideas Thread, December 2015 · 2015-12-05T06:33:22.167Z · LW · GW

It seems like there is a lot of room between "one simulation" and "unbounded computational resources"

Well, the point is that if we are running on bounded resources, then the time until they run out depends very sensitively on how many simulations we (and simulations like us) launch on average. Say that our simulation has a million years allocated to it, and the simulations we launch each start a year back from the moment we launch them (so each takes about a year to catch up to the point where it launches its own).

If we don't launch any, we get a million years.

If we launch one, but that one doesn't launch any, we get half a million.

If we launch one, and that one launches one etc, then we get on the order of a thousand years.

If we launch two, and those each launch two, etc., then we get on the order of 20 years.
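
Here's a toy version of that budget arithmetic (a sketch under the assumptions above: a fixed budget of sim-years, every live sim costing one sim-year per year, and each sim spawning k children a year after it starts):

```python
def years_until_budget_exhausted(k, budget=10**6):
    """Top-level years until the compute budget runs out if every sim
    recursively spawns k child sims one year after it starts."""
    year, spent = 0, 0
    live, newest = 1, 1   # total live sims; size of the newest generation
    while spent <= budget:
        spent += live     # each live sim costs one sim-year this year
        newest *= k       # the newest generation spawns its children
        live += newest
        year += 1
    return year

for k in (0, 1, 2):
    print(k, years_until_budget_exhausted(k))  # ~10^6, ~1400, ~19
```

Any branching factor above one collapses the remaining lifetime from the full million years to a handful of doubling times.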

Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean.

True, 'terminates' is probably the wrong word. There's no reason why the simulation would be wiped. It just couldn't continue.

It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given

I'm not sure. I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.

Comment by Kyre on Crazy Ideas Thread, December 2015 · 2015-12-04T04:55:42.235Z · LW · GW

Here is a second Simulation Trilemma.

If we are living in a simulation, at least one of the following is true:

1) we are running on a computer with unbounded computational resources, or

2) we will not launch more than one simulation similar to our world, or

3) the simulation we are in will terminate shortly after we launch our own simulations.

Here 'shortly' means on the order of the period between the era at which we start our simulation and the time when that simulation reaches our own stage.

Comment by Kyre on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-10T04:44:58.985Z · LW · GW

I heard strawberry jam can be made with just strawberries, water and sugar on a frying pan on the radio.

I'd use a stove.

Comment by Kyre on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife · 2015-11-03T05:12:01.247Z · LW · GW

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similar minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have an interesting, solid argument ... please think carefully about whether such publicity helps the existential risk movement more than it harms. On the plus side, you might get people thinking about existential risk that otherwise would not have. On the minus side, most people aren't going to understand what you write, and some of the ones that half-understand it are going to loudly proclaim it as more evidence that MIRI etc are full of insane apocalyptic cultists.

Comment by Kyre on A toy model of the control problem · 2015-09-17T05:42:58.055Z · LW · GW

It's a trade-off. The example is simple enough that the alignment problem is really easy to see, but that also means it is easy to shrug it off and say "duh, just use the obvious correct utility function for B".

Perhaps you could follow it up with an example with more complex mechanics (and/or a more complex goal for A) where the bad strategy for B is not so obvious. You then invite the reader to contemplate the difficulty of the alignment problem as the complexity approaches that of the real world.

Comment by Kyre on Crazy Ideas Thread · 2015-07-09T06:23:18.650Z · LW · GW

Nitpick: we have equations for (special) relativistic quantum physics. Dirac was one of the pioneers, and the Standard Model for instance is a relativistic quantum field theory. I presume you mean that it's general relativity (gravity) plus quantum mechanics that is the problem.

(Douglas_Knight) Moreover, the predictions that QFT makes about chemistry are too hard. I don't think it is possible with current computers to compute the spectrum of helium, let alone lithium. A quantum computer could do this, though.

In the spirit of what Viliam suggested, maybe you could do computational searches for tractable approximations to QFT for chemistry, i.e. automatically find things like density functional theory. A problem there might be that you do not get any insight from the result, and you might end up overfitting.

Comment by Kyre on Stupid Questions June 2015 · 2015-06-02T04:54:05.145Z · LW · GW

Things that are unsexy but I can actually verify as having been useful more than once:

In wallet: folded-up tissue. For sudden attacks of the sniffles (especially on public transport), small cuts, emergency toilet paper.

In bag I carry every day: small pack of tissues, multitool, tiny torch, ibuprofen, pad and pencil, USB charging cable for phone, plastic spork, wet-wipe thing from KFC (why do they always shovel multiples of those things in with my order?).

Comment by Kyre on Visions and Mirages: The Sunk Cost Dilemma · 2015-05-21T05:55:04.331Z · LW · GW

Very rough toy example.

Say I've started a project in which I can definitely see 5 days' worth of work. I estimate there'll be some unexpected work in there somewhere, maybe another day, so I estimate 6 days.

I complete day one but have found another day's work. When should I estimate completion now? Taking the outside view, finishing in 6 days (on day 7) is too optimistic.

Implicit in my original estimate was a "rate of finding new work" of about 0.2 days per day. But now I have more data on that, so I should update the 0.2 figure. Let's see, 0.2 is my prior; I should build a model for "rate of finding new work" and figure out what the correct Bayesian update is ... screw it, let's assume I won't find any more work today and estimate the rate by Laplace's rule of succession. My updated rate of finding new work is 0.5. Hmmm, that's pretty high; the new work I find is itself going to generate new work, better sum the geometric series ... 5 known days' work plus 5 more unknown, so I should finish in 10 days (ie day 11).

I complete day 2 and find another day's work! Crank the handle around: should finish in 15 days (ie day 17).

... etc ...

If this state of affairs continues, my expected total amount of work grows really fast, and it won't be very long before it becomes clear that it is not profitable.

Contrast this with: I can see 5 days of work, but experience tells me that the total work is about 15 days. The first couple of days I turn up additional work, but I don't start to get worried until around day 3.
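
Written as code, that re-estimation procedure looks something like this (a sketch of one reading of it; the numbers above are rounded, so day 2 onward won't match exactly):

```python
def reestimate(known_days, finds, days_worked):
    """Estimate remaining days of work: Laplace's rule of succession for
    the rate of finding new work (counting today, assumed find-free, as
    an extra trial), then sum the geometric series of work begetting work."""
    rate = (finds + 1) / ((days_worked + 1) + 2)
    if rate >= 1:
        return float("inf")  # finding work faster than doing it
    return known_days / (1 - rate)

# Day 1 done, one new day found; 5 days still known (5 - 1 + 1):
print(reestimate(5, finds=1, days_worked=1))  # 10.0 -> finish on day 11
# Day 2 done, another day found:
print(reestimate(5, finds=2, days_worked=2))  # 12.5, rounded up to ~15 above
```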

Comment by Kyre on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-05-02T14:53:28.584Z · LW · GW

Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.

Comment by Kyre on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-05-01T09:21:36.642Z · LW · GW

Thanks for replying! Sorry if the bit I quoted was too short and over-simplified.

That does clarify things, although I'm having difficulty understanding what you mean by the phrase "causal structure". I take it you do not mean the physical shape or substance, because you say that a different computer architecture could potentially have the right causal structure.

And I take it you don't mean the cause and effect relationship between parts of the computer that are representing parts of the brain, because I think that can be put into one-to-one correspondence with the cause and effect relationship of the things being represented.

For example, if neuron N1 causes changes to neurons N2, N3 and N4, and I have a simulated S1 causing changes to simulated S2, S3 and S4, then that simulated cause and effect happens by honest-to-god physical cause and effect: voltage levels in the memory gates representing S1 propagate through the architecture to the gates representing S2, S3, S4, causing them to change.

Using a different computer architecture may avert this problem ...

So consciousness would have to then be something that flesh brains and "correctly causally structured" computer hardware have in common, but which is not shared by a simulation of either of those things running on a conventional computer?

Comment by Kyre on Shawn Mikula on Brain Preservation Protocols and Extensions · 2015-04-29T05:32:47.265Z · LW · GW

That is very interesting; there does seem to be quite rapid progress in this area.

From the blog entry:

... the reason for this is because simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

Can anyone explain what that means? I can't see how it can be correct.

Comment by Kyre on Self-verification · 2015-04-21T04:35:40.206Z · LW · GW

Tattoo private key on inside of thigh.

Comment by Kyre on Concept Safety: Producing similar AI-human concept spaces · 2015-04-15T05:09:18.320Z · LW · GW

What's to stop the AI from instead learning that "good" and "bad" are just subjective mental states or words from the programmer, rather than some deep natural category of the universe? So instead of doing things it thinks the human programmer would call "good", it just tortures the programmer and forces them to say "good" repeatedly.

The pictures and videos of torture in the training set that are labelled "bad".

It is not perfect, but I think the idea is that with a large and diverse training set, alternative models of "good/bad" become extremely contrived, and the human one you are aiming for becomes the simplest model.

I found the material in the post very interesting. It holds out hope that after training your world model, it might not be as opaque as people fear.

Comment by Kyre on Request for Steelman: Non-correspondence concepts of truth · 2015-03-25T05:52:53.040Z · LW · GW

I'm not sure succeeding at number 4 helps you with the unattractiveness and discomfort of number 3.

Say you do find some alternative steel-manned position on truth that is comfortable and intellectually satisfying. What are the odds that this position will be the same position as that held by "most humans", or that understanding it will help you get along with them?

Regardless of the concept of truth you arrive at, you're still faced with the challenge of having to interact with people who have not-well-thought-out concepts of truth in a way that is polite, ethical, and (ideally) subtly helpful.

Comment by Kyre on What are the resolution limits of medical imaging? · 2015-01-26T08:32:06.263Z · LW · GW

I thought CLARITY was an interesting development - a brain preservation technique that renders tissue transparent. I imagine in the near future there are likely to be benefits going both ways between preservation and imaging research.

Comment by Kyre on The Importance of Sidekicks · 2015-01-09T05:07:42.227Z · LW · GW

Buffy / Xander, Motoko / Batou, Deunan / Briareos

(although I'm not sure "Sidekick" is exactly right here)

Comment by Kyre on Stupid Questions December 2014 · 2014-12-09T05:13:05.269Z · LW · GW

I thought this sounded familiar

Comment by Kyre on xkcd on the AI box experiment · 2014-11-23T23:13:03.132Z · LW · GW

Ah, my mistake, thanks again.

Comment by Kyre on xkcd on the AI box experiment · 2014-11-22T04:08:11.520Z · LW · GW

Downvoted for bad selective quoting in that last quote. I read it and thought, wow, Yudkowsky actually wrote that. Then I thought, hmmm, I wonder if the text right after that says something like "BUT, this would be wrong because ..." ? Then I read user:Document's comment. Thank you for looking that up.

Comment by Kyre on Rationality Quotes October 2014 · 2014-10-04T09:13:03.740Z · LW · GW

I believe this is incorrect. The required proportion of the population that needs to be immune to get a herd immunity effect depends on how infectious the pathogen is. Measles is really infectious, with an R0 (number of secondary infections caused by a typical infectious case in a fully susceptible population) of over 10, so you need 90 or 95% vaccination coverage to stop it spreading - which is why it didn't take much of a drop in vaccination coverage before we saw new outbreaks.

R0 estimates for seasonal influenza are around 1.1 or 1.2. Vaccinating 100% of the population with a vaccine with 60% efficacy would give a very large herd immunity effect (a toy SIR model I just ran says starting with 40% immune reduces the attack rate from 35% to less than 2% for R0 1.2).
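
For anyone who wants to reproduce the ballpark, a deterministic SIR final-size iteration does it (a sketch along those lines, not necessarily the exact model that was run):

```python
import math

def attack_rate(r0, immune=0.0, seed=1e-4):
    """Final fraction infected, from the SIR final-size relation
    z = seed + S0 * (1 - exp(-R0 * z)), solved by fixed-point iteration."""
    s0 = 1.0 - immune - seed
    z = seed
    for _ in range(1000):
        z = seed + s0 * (1.0 - math.exp(-r0 * z))
    return z

print(attack_rate(1.2))              # ~0.31 with no immunity
print(attack_rate(1.2, immune=0.4))  # ~0.0004: 40% immune kills the epidemic
```

The qualitative point is just that the effective reproduction number drops to 1.2 × 0.6 = 0.72 < 1, so the epidemic can't take off.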

(Typo edit)

Comment by Kyre on What's the right way to think about how much to give to charity? · 2014-09-26T05:39:31.699Z · LW · GW

My current rationalisation for my level of charitable giving is "if, say, the wealthiest top billion humans gave as much as me, most of the world's current problems that can be solved by charity would be solved in short order".

I use this as a labor-saving angst prevention device.

Me: "Am I a good person ? Am I giving too little ? How should I figure out how much to give ? What does my giving reveal about my true preferences ? What would people I admire think of me if they knew ?"

Me: "Extra trillions thing. Get back to work."

Comment by Kyre on The Great Filter is early, or AI is hard · 2014-09-01T05:17:14.806Z · LW · GW

1 - All but one of our ships BUILT for space travel that have gone on to escape velocity have failed after a few decades and less than 100 AUs. Space is a hard place to survive in.

Voyagers 1 and 2 were launched in 1977, are currently 128 and 105 AU from the Sun, and both are still communicating. They were designed to reach Jupiter and Saturn - Voyager 2 had mission extensions to Uranus and Neptune (interestingly, it was completely reprogrammed after the Saturn encounter, and now makes use of communication codes that hadn't been invented when it was launched).

Pioneers 10 and 11 were launched in 1972 and 1973 and remained in contact until 2003 and 1995 respectively, their failures being due to insufficient power for communication from their radioisotope power sources. Pioneer 10 stayed in communication out to 80 AU.

New Horizons was launched in 2006 and is still going (encounter with Pluto next year). So, 3 out of 5 probes designed to explore the outer solar system are still going, 2 with 1970s technology.

Comment by Kyre on The immediate real-world uses of Friendly AI research · 2014-08-28T04:35:05.996Z · LW · GW

Thanks, that is interesting.

Comment by Kyre on The immediate real-world uses of Friendly AI research · 2014-08-27T05:44:01.707Z · LW · GW

That's not why it's useful. It's useful because it provides liquidity and reduces the costs of trading.

Absent other people getting their trades completed slightly ahead of yours, is getting your trades completed in a millisecond instead of a second really that valuable? I'm not being rhetorical - I know very little about finance. What processes in the rest of the economy happen fast enough to make millisecond trading worthwhile?

I would have guessed a failure to solve a co-ordination problem. That is, at one time trades were executed on the timescale of minutes (or maybe even days or weeks once upon a time), and at every point in time since, there has been a marginal advantage to getting your trades done a little faster than everyone else. At some point the costs of HST outweighed the liquidity benefits, but no-one (alone) was in a position to back out without losing - the end result being major engineering projects aimed at shaving milliseconds off network propagation delays, and flash crashes.

I can imagine an alternative universe where, at the point when trade times got down under a second, everyone got together and said "look, this could get silly", and agreed that exchanges should collect trades arriving in 1-second buckets and execute them in a randomly permuted order. (Or does something like that not work for some obvious reason?)
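
The bucket scheme is easy to sketch (hypothetical and simplified; real "frequent batch auction" proposals use uniform-price auctions rather than random ordering):

```python
import random

def execute_in_buckets(timestamped_orders, execute, bucket_seconds=1.0):
    """Group orders by 1-second arrival bucket, then run each bucket in
    a random permutation, so sub-bucket speed confers no priority."""
    buckets = {}
    for t, order in timestamped_orders:
        buckets.setdefault(int(t // bucket_seconds), []).append(order)
    for key in sorted(buckets):
        batch = buckets[key]
        random.shuffle(batch)  # arrival order within the bucket is erased
        for order in batch:
            execute(order)

# Two orders 1 ms apart land in the same bucket; their execution order is random.
execute_in_buckets([(0.0005, "buy A"), (0.0015, "sell A"), (1.2, "buy B")],
                   execute=print)
```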

(Also, I would guess that HST does not divert "a good chunk" of the return from other people's investments - if it were more than a sliver, I suspect the co-ordination problem would have got solved.)

Comment by Kyre on [deleted post] 2014-08-14T05:01:54.928Z

I don't know if there's a name for it. In general, consequentialism is over the entire timeline.

Yes, that makes the most sense.

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.

No no, I understand that you're not talking about killing people off and replacing them; I was just trying (unsuccessfully) to give the clearest example I could.

And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility ... unless "playing god events" have negative utility.

Comment by Kyre on [deleted post] 2014-08-13T05:08:03.235Z

Is there a separate name for "consequentialism over world histories" in comparison to "consequentialism over world states"?

What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say "don't do it, killing people is bad". Consequentialism over world states would say "do it, utility will increase" (maybe with provisos that no-one notices or remembers the killing). Consequentialism over world histories would say "the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don't do it".

Comment by Kyre on [QUESTION]: What are your views on climate change, and how did you form them? · 2014-07-10T05:17:45.295Z · LW · GW

OK now I have to quote this:

Bernard Woolley: What if the Prime Minister insists we help them?

Sir Humphrey Appleby: Then we follow the four-stage strategy.

Bernard Woolley: What's that?

Sir Richard Wharton: Standard Foreign Office response in a time of crisis.

Sir Richard Wharton: In stage one we say nothing is going to happen.

Sir Humphrey Appleby: Stage two, we say something may be about to happen, but we should do nothing about it.

Sir Richard Wharton: In stage three, we say that maybe we should do something about it, but there's nothing we can do.

Sir Humphrey Appleby: Stage four, we say maybe there was something we could have done, but it's too late now.

  • Yes, Minister

(Note I don't mean this is an intentional strategy as applied to climate change, even one operating at a subconscious level within individuals. Given a range of skeptics with different opinions, it could be that different ones are coming into the spotlight so that the "most defensible" argument for doing nothing appears at the appropriate time. I just thought the parallel was funny.)

Comment by Kyre on [QUESTION]: What are your views on climate change, and how did you form them? · 2014-07-09T06:02:38.002Z · LW · GW

Current beliefs on climate change: I would defer to the IPCC.

I would have first come across the subject while I was at school, about 25 years ago (though probably not at school itself, or at least only in passing). I think I accepted the idea as plausible based on a basic understanding of the physics and on scientific authority (probably of science popularisers). I don't remember anyone mentioning quantitative warming estimates, or anyone being particularly alarmist or alarmed.

My current views aren't based on detailed investigation. I would say they are based mostly on (a) things on climate change I come across in general scientific reading e.g. I have read most abstracts of articles and letters (and try to struggle through full text of all climate-related ones) in Nature in the last 7 years or so, and more sporadically in the last 20 years; and (b) the Azimuth blog of John Baez et al, who seem genuinely interested in what you can and can't get out of climate models.

One thing I learned on reading your posts (if I got the right impression) is that "respectable skeptics" agree about the CO2 increase, that it is human-caused, that there is some warming, and that the warming is CO2-caused, but they disagree about the magnitude and confidence of future projections. I did not know this.

Comment by Kyre on How do you take notes? · 2014-06-24T05:08:03.401Z · LW · GW

Another text file user. My current system is a log.txt file that has to-do lists at the top, followed by a big list of ideas waiting to be filed, followed by datestamped entries going downwards. That way the next thing to do is right at the top, and I can just cat the date to the bottom to write an entry. I keep this in the home directory on my notebook, but regularly copy it up to my Dropbox. When it gets really long I cut the log part off and save it.

I have another set of files with story ideas, productivity notes, personal thoughts, wildlife sightings, goth sightings, etc, but they've kind of accreted over a long time and aren't well organised, with duplicate files for the same thing. What I'm trying at the moment is to just enter everything in the log, and go through the log every now and again to classify and copy things out.

One problem I have is ubiquity - I love my light-weight notebook and carry it a lot of places, but not everywhere. If I could find a really good phone note-taking app from which it was really really simple to export notes, I might use that.

The other problem is diagrams and photos. I find drawing diagrams and annotating them really helps me think. I have a hardback graph-paper notebook and pencil that I use, but I'd like to have my drawings digitally too, and link images into my log. I'm thinking maybe some sort of markdown-type thing, so I can look at my log with a browser and see the pictures.

Comment by Kyre on Paperclip Maximizer Revisited · 2014-06-19T06:27:28.283Z · LW · GW

Rather flogs a dead horse, but highlights an important difference in perspective. You tell your AI to produce paperclips, and eventually it stops and asks if you would like it to do something different.

You could think "hey, cool, it's actually doing friendly stuff I didn't ask for", or you could think "wait ... how would knowing what I really want help it produce more paperclips ..."