Posts

Comments

Comment by wuncidunci on Rationality Quotes April 2014 · 2014-04-08T13:50:24.666Z · LW · GW

Hodges claims that Turing at least had some interest in telepathy and prophecies:

These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one’s ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be the first to go.

Readers might well have wondered whether he [Turing] really believed the evidence to be ‘overwhelming’, or whether this was a rather arch joke. In fact he was certainly impressed at the time by J. B. Rhine’s claims to have experimental proof of extra-sensory perception. It might have reflected his interest in dreams and prophecies and coincidences, but certainly was a case where for him, open-mindedness had to come before anything else; what was so had to come before what it was convenient to think. On the other hand, he could not make light, as less well-informed people could, of the inconsistency of these ideas with the principles of causality embodied in the existing ‘laws of physics’, and so well attested by experiment.

Alan Turing: The Enigma (Chapter 7)

Comment by wuncidunci on Rationality Quotes April 2014 · 2014-04-03T08:26:53.375Z · LW · GW

A video of the whole talk is available here.

Comment by wuncidunci on 2013 Survey Results · 2014-01-23T07:59:43.728Z · LW · GW

Ahh, thank you.

Comment by wuncidunci on 2013 Survey Results · 2014-01-22T22:20:52.329Z · LW · GW

Did you mean Saint Boole?

And whence the blasphemy?

Comment by wuncidunci on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-11T13:32:00.553Z · LW · GW

If someone believes they have a really good argument against cryonics, even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. Sounds to me like it's quite worth their time.
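Spelled out, under the simplifying assumption that the argument either fully convinces or fails outright:

$$\mathbb{E}[\text{gain}] = 0.10 \times \$500 = \$50$$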

Comment by wuncidunci on Dr. Jubjub predicts a crisis · 2014-01-11T12:18:35.796Z · LW · GW

Quite possible. I didn't intend for that sentence to come across in a hostile way.

Since in Swedish we usually talk about the 1800s and the 1900s instead of the 19th and 20th centuries, I thought something could have been lost in translation somewhere between the original sources, the book by Kelly, and gwern's comment, which is itself ambiguous between (set aside an island for growing big trees for making wooden warships) (in the 1900s) and (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former.)

Comment by wuncidunci on Dr. Jubjub predicts a crisis · 2014-01-10T20:36:49.269Z · LW · GW

If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large, currently unanticipated obstacles to both AGI and uploading. Computing power seems to be just about sufficient right now (if we look at supercomputers), so it probably isn't the bottleneck. So the obstacle will probably be a conceptual limitation for AGI, and a scanning or conceptual limitation for uploads.

A conceptual limitation for uploads seems unlikely, because we're just taking a system, cutting it up into smaller pieces, and solving differential equations on a computer. Lots of small problems to solve, but no major conceptual ones. We could run into problems related to measuring quantum systems when doing the scanning (I believe Scott Aaronson wrote something about this suspicion lately). Note that this also puts a bound on the level of nano-technology we could have achieved: if we had neuron-sized scanning robots, we would be able to scan a brain and start the Hansonian scenario. Note that this does not preclude slightly larger-scale manufacturing technologies, which would probably come from successive miniaturisations of 3d-printers.
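To give the flavour of "lots of small problems, no conceptual ones", here is a minimal sketch of the kind of computation involved: forward-Euler integration of a toy leaky integrate-and-fire neuron. Every constant is invented for illustration; nothing here is physiological.

```python
# Toy sketch: forward-Euler integration of a leaky integrate-and-fire neuron.
# dV/dt = (-(V - V_rest) + R * I) / tau, with a spike-and-reset rule at threshold.
# All constants are illustrative stand-ins, not measured values.

V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # membrane potentials (mV)
TAU, R = 10.0, 1.0                               # time constant (ms), resistance
DT = 0.1                                         # integration step (ms)

def simulate(input_current, steps):
    """Integrate the membrane potential and return the spike times (in ms)."""
    v, spikes = V_REST, []
    for step in range(steps):
        dv = (-(v - V_REST) + R * input_current) / TAU
        v += dv * DT                   # one Euler step of the ODE
        if v >= V_THRESH:              # threshold crossing: record spike, reset
            spikes.append(step * DT)
            v = V_RESET
    return spikes

print(simulate(input_current=20.0, steps=1000))  # 100 ms of simulated time
```

A real emulation would repeat steps like this across billions of compartments with measured parameters: a scale problem rather than a conceptual one.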

Conceptual difficulties in creating AGI are more or less expected by everyone around here, but if AGI is delayed by over a century we should get quite worried about other existential risks on our way there. Major contenders are global conflict and terrorism, especially involving nuclear, nano-technological or biological weapons. Even if nano-technology does not reach the level described in sci-fi, the bound given above still allows for enough development to make advanced weapons a question of blueprints and materials. Low-probability, huge-impact risks from global warming are also worth mentioning, if only to note that there are a lot of other people working on them.

What does this tell us about analysing long-term risks like the slithy toves? Well, I don't know anything about slithy toves, but let's look at the eugenics stuff discussed earlier and consider how it would influence the probability of major global conflicts. The question is not whether it would increase the risk of global conflict, but by how much. On the other hand, if AI safety is already taken care of, it becomes a priority to develop AGI as soon as humanly possible, and then it would be really good if "humanly possible" were a sigma or so better than today. Still, the gain wouldn't be great, since most of the risks we would be facing at that point would be quite small for each year (as it seems today; we could of course get other information on our way there). It's really quite hard to say what the proper balance between more intelligent people and more time available would be at that point: we could say that if we've already had a century to solve the problem, more time can't be that useful; on the other hand, we could say that if we still haven't solved the problem in a century, there are loads of sequential steps to get right and we need all the time we can buy.

tl;dr: No AGI & no uploads => most X-risk comes from various types of conflict => eugenics or any kind of superhumans increases X-risk, due to the risk of war between enhanced and old-school humans

Comment by wuncidunci on Dr. Jubjub predicts a crisis · 2014-01-10T19:42:53.542Z · LW · GW

a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;

This was probably Sweden planting lots of oaks in the early 19th century: 34,000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for an oak to mature, they weren't used, and that part of the island is now a nature preserve. Quite funny: when the parliament was deciding the issue, it seems some of the members already doubted whether oak would remain a good material for building ships for that long.

Also observe that 1900s ≠ 19th century, so they weren't that silly.

Had some trouble finding English references for this, but this (p 4) gives some history, and numbers are available on Swedish Wikipedia.

Comment by wuncidunci on The mechanics of my recent productivity · 2014-01-08T17:27:03.882Z · LW · GW

and the dark arts that I use to maintain productivity.

Yes! Please tell us more about these!

Comment by wuncidunci on Defining causal isomorphism · 2013-12-17T23:26:25.649Z · LW · GW

Two points of relevance that I see are:

If we care about the nature of morphisms of computations only because of some computations being people, the question is fundamentally what our concept of people refers to, and if it can refer to anything at all.

If we view isomorphic as a kind of extension of our naïve view of equals, we can ask what the appropriate generalisation is when we discover that equals does not correspond to reality and we need a new ontology, as in the linked paper.

Comment by wuncidunci on Book Review: Computability and Logic · 2013-11-24T14:36:48.900Z · LW · GW

van Dalen's Logic and Structure has a chapter on second order logic, but it's only 10 pages long.

Shapiro's Foundations without Foundationalism has as its main purpose to argue in favour of SOL. I've only read the first two chapters, which give philosophical arguments for SOL; they were quite good, but a bit too chatty for my taste. Chapters 3 to 5 are where the actual logic lives, and I can't say much about them.

Comment by wuncidunci on Book Review: Computability and Logic · 2013-11-21T17:46:38.854Z · LW · GW

Which edition did you read? The image in the post is of the fifth edition, and some people (eg Peter Smith in his Teach Yourself Logic (§2.7 p24)) claim that the earlier editions by just Boolos and Jeffrey are better.

Cutland's Computability and Mendelson's Introduction to Mathematical Logic between them look like they cover everything in this one, and they are both on MIRI's reading list. What is the advantage of adding Computability and Logic to them? (ie is it easier to start out with, does it cover some ground that both miss, or is it just good to have alternatives?)

Comment by wuncidunci on 2013 Census/Survey: call for changes and additions · 2013-11-07T23:19:08.553Z · LW · GW

The questions on Smoking and Nicotine distinctly lack a middle question "Do you use some kind of smokeless tobacco?" (eg I don't smoke but use snuff almost daily).

Comment by wuncidunci on Very Basic Model Theory · 2013-11-01T17:07:19.055Z · LW · GW

Cantor, who did the first work on infinite cardinals and ordinals, seemed at times to have a somewhat mystical point of view. He thought his ideas about transfinite numbers were communicated to him by God, whom he also identified with the absolute infinite (the cardinality of the class of all cardinals, which is too big to itself be a cardinal). This was during the 19th century, so quite recently.

I'd say that much mysticism about foundational issues, like what numbers really are or what these possible infinities actually mean, has been abandoned by mathematicians in favour of actually doing real mathematics. We also have quite good formal foundations in terms of ZF and formal logic nowadays, so discussions like that do not help in the process of doing mathematics (unlike, say, discussions about the nature of real numbers before we had them formalised in terms of Cauchy sequences or Dedekind cuts).

Comment by wuncidunci on Is it immoral to have children? · 2013-10-24T00:12:44.973Z · LW · GW

Coffee purchases seem to be done by near-mode thinking (at least for me), while having children is usually quite planned.

Personally I like giving myself quite a bit of leniency when it comes to impulsive purchases in order to direct my cognitive energy to long-term issues with higher returns. Compare and contrast to the idea of premature optimization in computer science.

Comment by wuncidunci on Relevance of the STEPS project to (F)AI teams · 2013-09-02T18:16:53.635Z · LW · GW

Understanding the OS to be able to optimize better sounds somewhat useful to a self-improving AI.

Understanding the OS to be able to reason properly about probabilities of hardware/software failure sounds very important to a self-improving AI that does reflection properly. (obviously it needs to understand hardware as well, but you can't understand all the steps between AI and hardware if you don't understand the OS)

Comment by wuncidunci on Engaging Intellectual Elites at Less Wrong · 2013-08-15T12:17:09.811Z · LW · GW

Private bittorrent trackers come to mind. Though over there, "good enough" is not measured by quality of conversation, but by your ability to keep up a decent ratio.

Comment by wuncidunci on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T07:53:47.166Z · LW · GW

I've read it but didn't consider the possibility of a twist like that here as well.

Comment by wuncidunci on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T20:05:37.305Z · LW · GW

My largest problem with the Dark Lord == Death theory is that it doesn't really square with Quirrelmort being another super-rationalist, or with Eliezer's First Law of Fanfiction (you can't make Frodo a Jedi unless you give Sauron the Death Star). Either Quirrelmort is a henchman or personification of Death, which is unlikely considering that he is afraid of dying and that the dementor tries to frighten him in the Humanism arc. Or Quirrelmort is not the Sauron of this story but will help Harry defeat the main bad guy, Death. This could be a really cool ending, but I doubt it would fit in the remaining arc.

Comment by wuncidunci on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T19:50:48.224Z · LW · GW

Dementors symbolise death. Dementors can destroy humans (by their kiss), and Harry can destroy dementors (by the True Patronus). That, if anything, marks him as Death's equal. If not, dementors obeying him can be understood as him being Death's equal.

Comment by wuncidunci on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-07T07:02:29.452Z · LW · GW

If they run your function from within theirs, they simply tell the computer to start reading those instructions, possibly with a timer for stopping it, as detailed in other parts of the comments. If they implement a VM from scratch, they can mess with how the library functions work, for instance giving you a clock that moves much faster, so that your simulation must stop within 0.1s instead of 10 and they can run your code 100 different times to deal with randomness. Now, implementing your own VM is probably not the optimal way to do this; you probably just want to do a transformation of the source code to use your own secret functions instead of the standard time ones.
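Here is a minimal Python sketch of that last idea, rewriting the opponent's clock calls to point at a fast clock. Everything in it — the strategy() interface, the crude textual replacement (a robust version would rewrite the AST), and the speed-up factor — is a hypothetical stand-in, not how any actual tournament worked:

```python
import time

_REAL_START = time.time()
SPEEDUP = 100.0  # the opponent's clock runs 100x faster than reality

def _fast_time():
    """Secret replacement for time.time(): 10 simulated seconds pass in 0.1 real seconds."""
    return _REAL_START + (time.time() - _REAL_START) * SPEEDUP

def run_dilated(opponent_source, my_source):
    """Rewrite the opponent's source so its deadline checks hit our fast clock,
    then run its strategy() against our own source code."""
    rewritten = opponent_source.replace("time.time()", "_fast_time()")
    namespace = {"_fast_time": _fast_time}
    exec(rewritten, namespace)               # defines the opponent's strategy()
    return namespace["strategy"](my_source)

# Demo opponent that budgets itself 10 seconds of (what it thinks is) real time.
demo_opponent = """
def strategy(their_source):
    deadline = time.time() + 10.0     # rewritten to use the fast clock
    while time.time() < deadline:
        pass                          # pretend to simulate the other player
    return "Cooperate"
"""
print(run_dilated(demo_opponent, my_source="..."))  # returns after ~0.1 real seconds
```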

Comment by wuncidunci on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-06T18:44:57.521Z · LW · GW

Unless one of the contestants has time limits in their VM (or on their simulations in general). You can clearly implement a VM where time goes faster just by pretending they have a slower processor than you really run on.

Comment by wuncidunci on Prisoner's Dilemma (with visible source code) Tournament · 2013-06-06T09:19:01.216Z · LW · GW

Unless the other contestant wrote a virtual machine in which they are running you. Something which I think would be quite doable considering the ridiculously large amount of time you've got (10s gives ~10^10 instructions).

Comment by wuncidunci on Post ridiculous munchkin ideas! · 2013-05-24T08:07:29.542Z · LW · GW

Hasn't been very consistent lately. Might try this later.

Comment by wuncidunci on Welcome to Less Wrong! (5th thread, March 2013) · 2013-05-23T17:30:00.060Z · LW · GW

Not strictly speaking. Warning: what follows is pure speculation about possibilities which may have little to no relation to how a computational multiverse would actually work. It could be possible that there are three computable universes A, B & C, such that the beings in A run a simulation of B, appearing as gods to the intelligences therein, the beings in B do the same with C, and finally the beings in C do the same with A. It would probably be very hard to recognize such a structure if you were in it, because of the enormous slowdowns in the simulation inside your simulation. Though it might have a comparatively short description as the solution to an equation relating a number of universes cyclically.

In case that wasn't clear, I imagine these universes having a common, quite high-level specification, with minds being primitive objects and so on. I don't think this would work at all if the universes had physics similar to our own: needing planets to form from elementary particles and evolution to run on these planets to get any minds at all, not to speak of the computational capabilities needed to simulate similar universes.

Comment by wuncidunci on Probability is in the Mind · 2013-05-23T15:48:21.500Z · LW · GW

Your question is not well specified. Even though you might think that the proposition "its favorite ball is blue" has a clear meaning, it is highly dependent on the precision to which the AI can see colours, how wide the interval defined as blue is, and how it treats multicoloured objects. If we suppose it would categorise the observed wavelength into one of 27 possible colours (one of them being blue), further suppose that it knew the ball to be of a single colour and not patterned, and further that it had no background information about the relative frequencies of different colours of balls or other useful prior knowledge, the prior probability would be 1/27. If we suppose that it had access to the internet and had read this discussion on LW about the colourblind AI, it would adjust its probability by updating on the chance of the discussion affecting the colour of its own ball.
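For concreteness, a tiny sketch of that prior and update, where both numbers in the update step are invented purely to show the shape of the calculation:

```python
# Prior: 27 equiprobable colour categories, no other information.
p_blue = 1 / 27

# Hypothetical update after reading the LW discussion. Both numbers below
# are made up; only the shape of the calculation matters.
p_relevant = 0.01              # P(the discussion actually concerns my ball)
p_blue_if_relevant = 0.5       # P(ball is blue | discussion concerns it)

# Law of total probability over "relevant or not".
p_blue_updated = p_relevant * p_blue_if_relevant + (1 - p_relevant) * p_blue
print(p_blue, p_blue_updated)  # 0.0370... -> 0.0417...
```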

Comment by wuncidunci on Welcome to Less Wrong! (5th thread, March 2013) · 2013-05-23T09:19:20.970Z · LW · GW

You seem to be neglecting the possibility of a cyclical god structure. Something which might very well be possible in Tegmark level IV if all the gods are computable.

Comment by wuncidunci on Boring Advice Repository · 2013-05-23T01:13:39.758Z · LW · GW

Note: according to my 30-second Google Scholar search, it is dipping/oral snuff that carries the higher risk of oral cancer. Nasal snuff seems safer (or perhaps just less well researched).

Comment by wuncidunci on Post ridiculous munchkin ideas! · 2013-05-22T22:06:12.716Z · LW · GW

That is true. However, in my experience you don't need to spend much time in the library itself if you know what you're looking for (you can always stay for the atmosphere). What takes time is getting to and from the library. The value of this time obviously depends on a lot of parameters: is the library close to your route to/from some other place, are you currently very busy, do you enjoy city walks/bike-rides, etc.

Comment by wuncidunci on Boring Advice Repository · 2013-05-22T21:29:53.024Z · LW · GW

Have you tried snuff? It smells quite nice and can help clear your nose as well as deliver nicotine.

Comment by wuncidunci on Post ridiculous munchkin ideas! · 2013-05-22T19:38:35.102Z · LW · GW

I've now tried f.lux for the past week or so, and now I'm disabling it. I like working late at night, and being a student in a term with revision but no lectures, I'm very flexible about what time I have to wake up. So it made me tired when I didn't want to be, which was annoying. It did work very well at getting me to bed, though, so I'll definitely re-enable it when I want to go to bed earlier.

Comment by wuncidunci on Post ridiculous munchkin ideas! · 2013-05-22T19:33:05.133Z · LW · GW

Many library catalogues are searchable online, so you just have to search a different site to see whether they have it or not. If they have it, it's probably quicker to take a trip to the library than to wait for shipping.

Comment by wuncidunci on LW Study Group Facebook Page · 2013-04-10T11:43:09.774Z · LW · GW

You could always create a throwaway FB account. But then of course it could be quite inconvenient to sign out of your usual account to sign in to this alternative one.

Comment by wuncidunci on Let's make a "Rational Immortalist Sequence". Suggested Structure. · 2013-02-24T23:24:56.855Z · LW · GW

How is this in any way relevant?

If someone were to write the same proposal from the point of view of a sequence on how to most effectively maximize animal welfare through research and optimal philanthropy, it would hardly be relevant to discuss whether it is unconditionally good to maximize animal welfare. Sure, this discussion might be useful to have, but when an article starts with "Suppose that" you don't start by fighting the hypothetical.

Comment by wuncidunci on Open Thread, February 1-14, 2013 · 2013-02-05T00:34:27.805Z · LW · GW

Might work; depends on how inconspicuous and patient you were. Certainly not the first time people have tried to recruit from/take over another organisation. Writing about it on the internet, however, will make what you're doing much more obvious if someone starts noticing.

Comment by wuncidunci on Singularity Institute is now Machine Intelligence Research Institute · 2013-01-31T16:00:38.433Z · LW · GW

The same, though Star Trek comes up second. Google does use a lot of other info about your computer to determine the results, though (like IP address and browser details).

Comment by wuncidunci on Singularity Institute is now Machine Intelligence Research Institute · 2013-01-31T14:44:17.483Z · LW · GW

When I searched, the first hit was the Malaysian town called Miri. Looks like an example of filter bubbles.

Comment by wuncidunci on DRAFT:Ethical Zombies - A Post On Reality-Fluid · 2013-01-10T20:34:38.833Z · LW · GW

Let N = 3^^^^^^3. Surely N nice worlds + another nice world is better than N nice worlds + a torture world. Why? Because another nice world is better than a torture world, and the prior existence of the N previous worlds shouldn't matter to that decision.

What about the probability of actually being in the torture world, which is a tiny 1/(N+1)? Surely the expected negative utility from this is so small it can be neglected? Sure, but equally, the expected utility of being the master of a torture world with probability 1/(N+1) can be neglected.

What this post tells me is that I'm still very very confused about reality fluid.

Comment by wuncidunci on Second-Order Logic: The Controversy · 2013-01-05T02:57:53.499Z · LW · GW

I would have done the following if I had been asked that: calculate which numbers I would have time to count up to before I was thrown out/got bored/died/the earth ended/the universe ran out of negentropy. I would probably have to answer "I don't know" or "I think X is a number" for some of them, but it's still an answer, and until recently people could not say whether "the smallest n>2 such that there are positive integers a, b, c satisfying a^n + b^n = c^n" was a number or not.

I'm not advocating any kind of finitism, but I agree that the position should be taken seriously.

Comment by wuncidunci on Godel's Completeness and Incompleteness Theorems · 2012-12-28T01:02:40.903Z · LW · GW

The standard approach in the foundations of mathematics is to consider a special first-order theory called ZFC; it describes sets, whose elements are themselves sets. Inside this theory you can encode all other mathematics using sets, for example by the von Neumann construction of the ordinals. Then you can restrict yourself to the finite ordinals and verify the Peano axioms, including the principle of induction, which you can now formulate using sets. So everything turns out to be unique and pinned down inside your set theory.
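For concreteness, here is a minimal sketch of the von Neumann encoding of the naturals, using Python frozensets as a stand-in for pure sets:

```python
# Von Neumann naturals: 0 is the empty set, and n + 1 = n ∪ {n}.
def succ(n):
    return n | frozenset([n])

zero = frozenset()
one = succ(zero)
two = succ(one)
three = succ(two)

# Each numeral is the set of all smaller numerals, so "m < n" becomes
# "m ∈ n", and the size of a numeral equals the number it encodes.
assert zero in three and one in three and two in three
assert len(three) == 3
```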

What about pinning down your set theory? Well, most mathematicians don't worry about set theory. The set theorists seem to be quite fine with it not being pinned down, but sometimes need to be careful about which model they are working inside. A very useful consequence of set theory not being pinned down is a construction called forcing, which allows you to prove the independence of the continuum hypothesis (the statement that every infinite set of reals can be bijected with either the naturals or the reals) from ZFC. What you do in this construction is work inside a model of set theory which is countable, which allows you to define a special kind of subset that does not exist in the original model but can be used to create a new model where certain properties fail to hold.

Some people want to use second-order logic, talking about properties as primitive objects, to get this kind of pinpointing instead. The standard criticism of this is that you need to formalise what you mean by properties through axioms and rules of inference, giving you something quite similar to set theory. I'm not very familiar with second-order logic, so I can't elaborate on the differences or similarities, but it does look like the next post in this sequence will be about it.

Comment by wuncidunci on Ideal Advisor Theories and Personal CEV · 2012-12-25T19:01:13.183Z · LW · GW

I think Sobel's fourth objection is confused about what an idealised/extrapolated agent would actually want. If it had access to such perfect experience that it makes the human condition look worse than death in comparison, then the obvious advice is not suicide, but rather to uplift the ordinary human to its own level. This should always be possible, since we must already have achieved it to create the extrapolated agent making the decision, so we can just repeat the process at full resolution on the original human.

Comment by wuncidunci on Godel's Completeness and Incompleteness Theorems · 2012-12-25T13:39:45.164Z · LW · GW

The reason why compactness is not provable from ZF is that you need choice for certain kinds of infinite sets of axioms. You don't need choice for countable sets (provided you have a way of mapping them into the integers, that is). You can get a proof of compactness for any countable set of axioms by proving completeness for any countable set of axioms, which can be done by constructing a model as in Johnstone's Notes on Logic and Set Theory, p. 25.

Comment by wuncidunci on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-23T20:12:52.189Z · LW · GW

I did not say this view of it would be accurate or rational. Hermione was, however, very upset in this scene and already saw Draco as a rival for Harry's friendship.

Comment by wuncidunci on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-23T00:07:51.988Z · LW · GW

Considering how Hermione reacted to the Science-with-Draco bit, we can guess her reaction to might-marry-Draco-instead. It would totally look to her like Harry was trying to keep his options open depending on how his orientation turned out after puberty.

Comment by wuncidunci on Causal Universes · 2012-11-30T00:37:29.322Z · LW · GW

The hypotheses that should interest an AI are not necessarily limited to those it can compute, but rather to those it can test. A hypothesis is useless if it does not tell us something about how the world looks when it's true as opposed to when it's false. So if there is a way for the AI to interact with the world such that it expects different probabilities of outcomes depending on whether the (possibly uncomputable) hypothesis holds or not, then the hypothesis is worth having a symbol for, even if the exact dynamics of the universe it describes cannot be computed.

Let's consider the case of our AI encountering a Turing Oracle. Two possible hypotheses for the AI could be A = "this is in fact a Turing Oracle, and for every program P it will output either the time until halting, or 0 if P doesn't halt", and B = "this is not a Turing Oracle but some computable machine Q". The AI could feed the supposed oracle a number of programs, and whenever it was told one of them would halt, it could run that program for the specified number of steps to see if it did indeed halt. After each program had halted, it would have to increase its probability that this was in fact a Turing Oracle, using Bayes' Theorem and estimates of the probabilities of guessing this right (or computationally deriving those numbers). If it did this for long enough, and this was in fact a Turing Oracle, it would gain higher and higher certainty of that fact.
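A schematic sketch of that update loop; the prior, the guesser's hit rate, and the simulated run of twenty confirmed claims are all invented numbers:

```python
# Schematic test of hypothesis A ("the box is a Turing Oracle") against
# hypothesis B ("it is some computable guesser"). A true oracle's halting-time
# claims always check out; we estimate a guesser's check out half the time.

def bayes_update(p_oracle, confirmed, p_correct_if_guessing=0.5):
    """Posterior P(A) after one of the box's halting claims has been checked."""
    like_a = 1.0 if confirmed else 0.0
    like_b = p_correct_if_guessing if confirmed else 1.0 - p_correct_if_guessing
    evidence = like_a * p_oracle + like_b * (1.0 - p_oracle)
    return like_a * p_oracle / evidence

p = 0.001                        # sceptical prior on the box being an oracle
for trial in range(20):          # pretend twenty claims were all confirmed
    p = bayes_update(p, confirmed=True)
    print(f"after trial {trial + 1}: P(oracle) = {p:.6f}")
# Each confirmation doubles the odds, so certainty climbs towards 1;
# a single failed check would drop the posterior straight to 0.
```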

What is it that the AI is doing? We can view the whole process above as a program which, given one of a limited set of experimental outcomes, outputs the probability that this experimental outcome would be the real one if H held. In the case of the Turing Oracle above, the set of outcomes is the set of pairs (P, n) where P is a program and n a positive integer, and the program will output 1 if P halts after n steps and 0 otherwise. I think this captures in full generality all possibilities a computable agent would be able to recognise.

What if the AI later on gains some extra computational capacity which makes it uncomputable? Say, for example, that it finds a Turing Oracle like in the above example and integrates it into its main processor. But this is essentially the only way it could happen: for the AI to become uncomputable, it would have to integrate an uncomputable physical process into its own processing. And for the AI to know it was actually uncomputable, and not merely incorporating the results of some computational process it didn't recognise, it would have to perform the above test. So when it then performs some uncomputable test on a new process, we can see this simply as the composite of the tests of the original and the new process, viewing all the message passing between the uncomputable processes as part of the experimental setup rather than internal computation.

Comment by wuncidunci on Causal Universes · 2012-11-29T23:59:45.003Z · LW · GW

Consider, instead of time travelling from time T' to T, that you were given a choice at time T of which universe you would prefer: A or B. If B was better, you would clearly pick it. Now consider someone giving you the choice instead between B and "B plus A until time T', when A gets destroyed". If A is by itself a better universe than nothing, surely having A around for a short while is better than not having A around at all. So "B plus A until time T', when A gets destroyed" is better than B, which in turn is better than A. So if you want your preferences to be transitive, you should prefer the scenario where you destroy A at time T' by time travelling to B.

There are two weaknesses in the above. Perhaps A is better than oblivion, but A between the times T and T' is really horrible (ie it has positive value in the long term but negative value in the short term). Then you wouldn't prefer having A around for a while over not having it at all. But this is a very exceptional scenario, not the "world goes on as usual but you go back and change something for the better" case that we seem to be discussing.

Another way this can fail is if you don't think that saying you have both universes B and A (for a while) around is meaningful. I agree that it is not obvious what this would actually mean, since the existence of universes is not something measurable inside said universes. You would need to invent some kind of meta-time and meta-universe, kind of like the simulation scenario EY was describing in the main article. But if you are uncomfortable with this, you should be equally uncomfortable with saying that A used to exist but now doesn't, since this is also a statement about universes which only makes sense if we posit some kind of meta-time outside of the universes.

Comment by wuncidunci on Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines. · 2012-11-26T23:20:18.758Z · LW · GW

To be fair, the Guardian story only references Terminator in the header. The text body is written by Lord Martin Rees and is a short but clear description of X-risk without any sci-fi references. It also focuses more on other X-risks; perhaps a difference in opinion amongst the founders?

Comment by wuncidunci on Proofs, Implications, and Models · 2012-10-29T22:28:51.228Z · LW · GW

The fact that if we put any two objects into the same (previously empty) basket as any other two objects, we will have four objects in that basket, is true before we make any definitions. But the statement 2 + 2 = 4 does not make any sense before we have invented: (a) the numerals 2 and 4, (b) the symbol for addition +, and (c) the symbol for equality =. Once we have invented meanings for these symbols (symbols as things we use in formal manipulations are quite different from words, and were invented quite late, much later than we started to actually solve problems mathematically), we have to show that they correspond to our intuitive meaning of putting things into baskets and counting them; but if they do, they will also satisfy, say, the Peano axioms for the natural numbers, which are the axioms we tend to start from to prove statements like 2 + 2 = 4 or "there are infinitely many prime numbers".

If we were to assume differently, such that 2 + 2 = 5, then our notions of + and = would not correspond to the notions of adding objects to a basket and counting them. This is because we could walk through our proof step by step (as described in this post) to find the first line where we write down something that is not true for our usual notion of adding apples; there we would find an assumption or a rule of inference that holds in this new theory but which does not correspond to apple comparison.
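For concreteness, here is a small sketch of how the Peano derivation of 2 + 2 = 4 can be mechanised, with numerals written as iterated successors (the tuple encoding is just one illustrative choice):

```python
# Peano numerals: Z is zero, ("S", n) is the successor of n.
Z = "Z"
def S(n):
    return ("S", n)

def add(m, n):
    """Peano addition: m + Z = m;  m + S(k) = S(m + k)."""
    return m if n == Z else S(add(m, n[1]))

two = S(S(Z))
four = S(S(S(S(Z))))
assert add(two, two) == four  # 2 + 2 = 4, by unfolding the two defining rules
```

Each unfolding of add corresponds to a line of the formal proof; an attempted proof of 2 + 2 = 5 fails at the step where a fourth successor would have to match a fifth.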

Comment by wuncidunci on Proofs, Implications, and Models · 2012-10-29T21:56:57.649Z · LW · GW

Another data point: in Cambridge, the first logic course done by mathematics undergraduates is in third year. It covers completeness and soundness of propositional and predicate logic and is quite popular. But in third year people are already so specialised that probably well under half of us take it.

Comment by wuncidunci on Stuff That Makes Stuff Happen · 2012-10-17T21:55:50.883Z · LW · GW

I think the division into problems and exercises usually seen in mathematics texts would be useful: a task is considered an exercise if it's a routine application of previous material; it's a problem if it requires some kind of insight or originality. So far most of the Koans have seemed more like problems than exercises, but depending on content both may be useful. I might be slightly biased towards this, as I greatly enjoy mathematics texts and am used to that style.