Posts

Jacques Thibodeau: Accelerating Alignment and AIs improving AIs 2023-05-12T22:20:19.075Z
Toronto AI safety meetup: Latent Knowledge and Contrast-Consistent Search 2023-04-29T23:24:03.191Z
Discuss AI Policy Recommendations 2023-04-25T14:21:29.805Z
Pub meetup: Developmental Milestones 2018-11-14T04:18:03.625Z
Meetup : Toronto EA meetup 2014-12-04T20:19:31.824Z
Meetup : Toronto: making hard decisions 2014-11-15T23:41:59.639Z
Meetup : Toronto: Meet Malo from MIRI 2014-06-10T02:50:33.298Z
Meetup : Toronto: Our guest: Cat Lavigne from the Center for Applied Rationality 2013-04-06T04:46:38.709Z
Meetup : Toronto: fight/flight/freeze experiences 2013-03-17T11:58:07.953Z
Meetup : Toronto - The nature of discourse; what works, what doesn't 2013-02-25T04:20:05.060Z
Meetup : Toronto - What's all this about Bayesian Probability and stuff?! 2013-02-15T17:28:03.102Z
Meetup : Toronto - Rational Debate: Will Rationality Make You Rich? ... and other topics 2013-02-11T01:12:08.680Z
Meetup : Toronto THINK 2012-12-06T20:49:01.072Z
Questions from potential donors to Giving What We Can, 80,000 Hours and EAA 2012-11-11T18:54:32.073Z
Meetup : Toronto THINK 2012-11-10T00:24:57.882Z
Meetup : Toronto THINK 2012-10-22T23:29:26.336Z
Notes from Online Optimal Philanthropy Meetup: 12-10-09 2012-10-13T05:36:29.297Z
Babyeater's dilemma 2011-11-15T20:15:25.446Z
Evolution, bias and global risk 2011-05-23T00:32:08.087Z
People who want to save the world 2011-05-15T00:44:18.347Z
[Altruist Support] Fix the SIAI's website (EDIT: or not. I'll do it) 2011-05-07T18:36:13.570Z
[Altruist Support] The Plan 2011-05-06T03:53:01.128Z
[Altruist Support] LW Go Foom 2011-05-02T02:22:17.129Z
[Altruist Support] How to determine your utility function 2011-05-01T06:33:52.080Z
HELP! I want to do good 2011-04-28T05:29:30.494Z

Comments

Comment by Giles on Rationality Cardinality · 2015-10-04T23:57:28.361Z · LW · GW

I've got a few people interested in an effective altruism version of this, plus a small database of cards. Suggestions on how to proceed?

Comment by Giles on Two arguments for not thinking about ethics (too much) · 2014-03-27T21:34:44.984Z · LW · GW

I got the same piece of advice - to think about things in terms of "wants" rather than "shoulds" or "have tos" - from someone outside the LW bubble, but in the context of things like doing my tax returns.

Comment by Giles on Want to have a CFAR instructor visit your LW group? · 2013-04-22T17:02:46.559Z · LW · GW

She teaches social skills to nerds at CFAR workshops. She has an incredibly positive view of humanity and of what people are capable of, and meeting her massively increased the size of my reference class for what a rational person is like.

Comment by Giles on LW Women Submissions: On Misogyny · 2013-04-14T00:48:23.134Z · LW · GW

LW Women Submissions

a call for anonymous submissions by the women on LW

Seven women submitted

uh... could this be rephrased?

Comment by Giles on Applied Rationality Workshops: Jan 25-28 and March 1-4 · 2013-03-11T04:21:33.799Z · LW · GW

I'm a very satisfied customer from the March workshop. The biggest win for me has been with social skills - it turns out that anxiety had been making me stupid, and that if I de-stress then whole new parts of my brain spring into action. And that was just one of a large number of practical insights. I was amazed at both how much CFAR know about how we can use our brains effectively, and at how much they were able to teach over 4 days. Really impressive, well-run effort with a buzz of "this is where it's happening".

I promised I'd write about this in more detail, so stay tuned!

Comment by Giles on Applied Rationality Workshops: Jan 25-28 and March 1-4 · 2013-03-11T04:05:36.233Z · LW · GW

The best thing about this was that there was very little status dynamic within the CFAR house - we were all learning together as equals.

Comment by Giles on When should you give to multiple charities? · 2013-02-27T17:11:12.669Z · LW · GW
  • Agree with purchasing non-sketchiness signalling and utilons separately. This is especially important if, like jkaufman, a lot of your value comes from being an effective altruist role model.

  • Agree that if diversification is the only way to get the elephant to part with its money then it might make sense.

  • Similarly, if you give all your donations to a single risky organization and they turn out to be incompetent then it might demotivate your future self. So you should hedge against this, which again can be done separately from purchasing the highest-expected-value thing.

  • Confused about what to do if we know we're in a situation where we're behaving far from how rational agents would, but aren't sure exactly how. I think this is the case with purchasing xrisk reduction, and with failure to reach Aumann agreement between aspiring effective altruists. To what extent do the rules still apply?

  • Lots of valid reasons for diversification can also serve as handy rationalizations. Diversification feels like the right thing to do - and hey, here are the reasons why! I feel like diversification should feel like the wrong thing to do, and then possibly we should do it anyway but sort of grudgingly.

Comment by Giles on When should you give to multiple charities? · 2013-02-27T16:09:35.428Z · LW · GW

I can imagine having a preference for saving at least X lives

I feel like you've got a point here but I'm not quite getting it. Our preferences are defined over outcomes, and I struggle to see how "saving X lives" can be seen as an outcome - I see outcomes more along the lines of "X number of people are born and then die at age 5, Y number of people are born and then die at age 70". You can't necessarily point to any individual and say whether or not they were "saved".

I generally think of "the utility of saving 6 lives" as a shorthand for something like "the difference in utility between (X people die at age 5, Y people die at age 70) and (X-6 people die at age 5, Y+6 people die at age 70)".

We'd have to use more precise language if that utility varies a lot for different choices of X and Y, of course.
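
To put that shorthand in symbols (a sketch only; the function name U is my notation, not something from the thread): write U(x, y) for the utility of the outcome "x people die at age 5, y people die at age 70". Then "the utility of saving 6 lives" unpacks to a difference of utilities:

```latex
% Sketch of the shorthand above. U is assumed notation:
% U(x, y) = utility of the outcome "x people die at age 5, y people die at age 70".
\Delta U_{\text{save 6}} = U(X - 6,\; Y + 6) - U(X,\; Y)
```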

Comment by Giles on An attempt to dissolve subjective expectation and personal identity · 2013-02-25T04:43:09.125Z · LW · GW

I felt like I gained one insight, which I attempted to summarize in my own words in this comment.

It also slightly brought into focus for me the distinction between "theoretical decision processes I can fantasize about implementing" and "decision processes I can implement in practice by making minor tweaks to my brain's software". The first set can include self-less models such as paperclip maximization or optimizing those branches where I win the lottery and ignoring the rest. It's possible that in the second set a notion of self just keeps bubbling up whatever you do.

One and a half insights is pretty good going, especially on a tough topic like this one. Because of inferential distance, what feels like 10 insights to you will feel like 1 insight to me - it's like you're supplying some of the missing pieces to your own jigsaw puzzle, but in my puzzle the pieces are a different shape.

So yeah, keep hacking away at the edges!

Comment by Giles on An attempt to dissolve subjective expectation and personal identity · 2013-02-23T00:53:46.930Z · LW · GW

I can imagine that if you design an agent by starting off with a reinforcement learner, and then bolting some model-based planning stuff on the side, then the model will necessarily need to tag one of its objects as "self". Otherwise the reinforcement part would have trouble telling the model-based part what it's supposed to be optimizing for.

Comment by Giles on Calibrating Against Undetectable Utilons and Goal Changing Events (part2and1) · 2013-02-22T13:54:22.213Z · LW · GW

This is like a whole sequence condensed into a post.

Comment by Giles on CEA does not seem to be credibly high impact · 2013-02-22T00:06:25.890Z · LW · GW

Incidentally, in case it's useful to anyone... The way I originally processed the $112M figure (or $68M as it then was) was something along the lines of:

  • $68M pledged
  • apply 90% cynicism
  • that gives $6.8M
  • that's still way too large a number to represent actual ROI from $170K worth of volunteer time
  • how can I make this inconvenient number go away?
  • aha! This is money that's expected to roll in over the next several decades. We really have no idea what the EA movement will turn into over that time, so should apply big future discounting when it comes to estimating our impact

    (note it looks like Will was more optimistic, applying 67% cynicism to get from $400 to $130)

Comment by Giles on CEA does not seem to be credibly high impact · 2013-02-21T23:46:49.041Z · LW · GW

This implies immediately that 75-80% haven't, and in practise that number will be higher care of the self-reporting. This substantially reduces the likely impact of 80,000 hours as a program.

Reduces it from what? There's a point at which it's more cost-effective to just find new people than to carry on working to persuade existing ones. My intuition doesn't say much about whether this happy point is above or below 25%.

Good point about self-reporting potentially exaggerating the impact though.

Comment by Giles on CEA does not seem to be credibly high impact · 2013-02-21T23:40:58.947Z · LW · GW

The pledging back-of-the-envelope calculation got me curious, because I had been assuming GWWC wouldn't flat out lie about how much had been pledged (they say "We currently have 291 members ... who together have pledged more than 112 million dollars" which implies an actual total not an estimate).

On the other hand, it's just measuring pledges; it's not an estimate of how much money anyone expects to actually materialise. It hadn't occurred to me that anyone would read it that way - I may be mistaken here though, in which case there's a genuine issue with how the number is being presented.

Anyway, I still wasn't sure the pledge number made sense so I did my own back-of-the-envelope:

  • £72.68M pledged
  • 291 members
  • £250K pledged per person over the course of their life
  • 40 years average expected time until retirement (this may be optimistic. I get the impression most members are young though)
  • £6.2K average pledged per member per year

That would mean people are expecting to make £62K per year averaged over their entire remaining career, which still seems very optimistic. But:

  • some people will be pledging more than 10%
  • there might be some very high income people mixed in there, dragging the mean up.

So I think this passes the laugh test for me, as a measure of how much people might conceivably have pledged, not how much they'll actually deliver.
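
For anyone who wants to rerun that laugh test, here is a minimal sketch of the arithmetic (the 40-year horizon and the 10% pledge rate are the assumptions stated above, not figures from GWWC):

```python
# Back-of-the-envelope check of the GWWC pledge total, using the assumptions above.
total_pledged = 72.68e6   # £72.68M pledged in total
members = 291             # number of members
years_to_retirement = 40  # assumed average remaining career length
pledge_rate = 0.10        # standard pledge of 10% of income

per_member_lifetime = total_pledged / members                     # ~ £250K per person
per_member_per_year = per_member_lifetime / years_to_retirement   # ~ £6.2K per year
implied_income = per_member_per_year / pledge_rate                # ~ £62K average income

print(f"Lifetime pledge per member: £{per_member_lifetime:,.0f}")
print(f"Per member per year:        £{per_member_per_year:,.0f}")
print(f"Implied average income:     £{implied_income:,.0f}")
```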

Comment by Giles on Philosophical Landmines · 2013-02-09T06:17:21.332Z · LW · GW

I love the landmine metaphor - it blows up in your face and it's left over from some ancient war.

Comment by Giles on Rationality Quotes February 2013 · 2013-02-09T04:34:03.795Z · LW · GW

Did he mean if they're someone else's fault then you have to fix the person?

Comment by Giles on [Link] Power of Suggestion · 2013-02-07T18:03:51.949Z · LW · GW

You also know your own results aren't fraudulent.

Comment by Giles on [Link] Power of Suggestion · 2013-02-07T17:58:06.418Z · LW · GW

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.

He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?

Also, yay priming! (I don't want this comment to sound negative about priming as such)

Comment by Giles on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T19:19:38.123Z · LW · GW

2 sounds wrong to me - like you're trying to explain why having a consistent internal belief structure is important to someone who already believes that.

The things which would occur to me are:

  • If both of you are having reactions like this then you're dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues - if you can both get curious - then that's great. But I don't know how to make that happen.
  • Does your friend actually have any contradictory beliefs? Do they believe that they do?
  • You could escalate - point out every time your friend applies a math thing to social justice. "2000 people? That's counting. You're applying a math thing there." "You think this is better than that? That's called a partial ordering and it's a math thing". I'm not sure I'd recommend this approach though.

Comment by Giles on How to offend a rationalist (who hasn't thought about it yet): a life lesson · 2013-02-06T18:50:52.591Z · LW · GW

This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it?

Remember you have to make a convincing case without using stuff like logic

Comment by Giles on Thoughts on the January CFAR workshop · 2013-02-05T14:34:52.086Z · LW · GW

Not that I know of

Any advice on how to set one up? In particular how to add entries to it retrospectively - I was thinking about searching the comments database for things like "I intend to", "guard against", "publication bias" etc. and manually find the relevant ones. This is somewhat laborious, but the effect I want to avoid is "oh I've just finished my write-up (or am just about to), now I'll go and add the original comment to the anti-publication bias registry".

On the other hand it seems like anyone can safely add anyone else's comment to the registry as long as it's close enough in time to when the comment was written.

Any advice? (I figured if you're involved at CFAR you might know a bit about this stuff).

Comment by Giles on Offer: I'll match donations to the Against Malaria Foundation · 2013-02-04T22:44:00.389Z · LW · GW

This is interesting. People who are vulnerable to the donor illusion either have some of their money turned into utilons, or are taught a valuable lesson about the donor illusion, possibly creating more utilons in the long term.

Comment by Giles on Thoughts on the January CFAR workshop · 2013-02-04T22:27:10.663Z · LW · GW

This is useful to me as I'll be attending the March workshop. If I successfully digest any of the insights presented here then I'll have a better platform to start from. (Two particular points are the stuff about the parasympathetic nervous system, which I'd basically never heard of before, and the connection between the concepts of "epistemic rationality" and "knowing about myself" which is more obvious-in-retrospect).

Thanks for the write-up!

And yes, I'll stick up at least a brief write-up of my own after I'm done. Does LW have an anti-publication-bias registry somewhere?

Comment by Giles on Value-Focused Thinking: a chapter-by-chapter summary · 2013-02-03T04:03:49.227Z · LW · GW

There's probably better stuff around, but it made me think of Hanson's comments in this thread:

http://lesswrong.com/lw/v4/which_parts_are_me/

Comment by Giles on S.E.A.R.L.E's COBOL room · 2013-02-02T17:18:55.468Z · LW · GW

just because a computer is doing something that a human could not do without understanding does not mean the computer must be understanding it as well

I think linking this concept in my mind to the concept of the Chinese Room might be helpful. Thanks!

Comment by Giles on Course recommendations for Friendliness researchers · 2013-01-10T03:23:53.676Z · LW · GW

More posts like this please!

Comment by Giles on 2012 Winter Fundraiser for the Singularity Institute · 2012-12-09T21:28:21.481Z · LW · GW

As part of Singularity University's acquisition of the Singularity Summit, we will be changing our name and ...

OK, this is big news. Don't know how I missed this one.

Comment by Giles on How to incentivize LW wiki edits? · 2012-12-09T21:01:27.410Z · LW · GW

Appoint a chief editor. Chief's most important job would be to maintain a list of what most urgently needs adding or expanding in the wiki, and posting a monthly Discussion post reminding people about these. (Maybe choosing a different theme each month and listing a few requested edits in that category, together with a link to the wiki page that contains the full list).

When people make these changes, they can add a comment, and the chief editor (or some other high-status figure) will respond with heaps of praise.

People will naturally bring up any other topics they'd like to see on the wiki or general comments about the wiki. Chief editor should take account of these and where relevant bring them up with the relevant people (e.g. the programmers).

Comment by Giles on Akrasia hack survey · 2012-11-30T19:40:45.628Z · LW · GW

Do you think "ugh" should be listed as a response to survey questions? (Or equivalently a check box that says "I've left some answers blank due to ugh field rather than due to not reading the question" - not possible with the current LW software, just brainstorming)

Comment by Giles on Should correlation coefficients be expressed as angles? · 2012-11-30T02:29:02.327Z · LW · GW

This might be helpful - thanks.

Comment by Giles on Akrasia hack survey · 2012-11-30T02:25:14.961Z · LW · GW

My answer for Exercise would be "I am trying this hack right now and so the results haven't come in yet" (so I answered "write-in").

I answered "I feel I should try" for lukeprog's algorithm, but it's really more of a "I'll put it on my list of hacks to try at some point, but with low priority as there's a whole bunch of others I should try first".

I like the title too, especially as it gives no information about what the survey is going to be about. (Still might be distorted as people's productivity experiences might correlate with how much time they spend filling in surveys on LW... but not sure there's much that we can do about that)

Comment by Giles on The Evil AI Overlord List · 2012-11-21T14:07:40.707Z · LW · GW

and if the AI can tell if its in a simulation vs the real world then its not really a test at all.

The AI would probably assign at least some probability to "the humans will try to test me first, but do a poor job of it so I can tell whether I'm in a sim or not"

Comment by Giles on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-21T04:50:23.739Z · LW · GW

If I understand Will's response correctly (under "Earmarking"), it's best to think of GWWC, 80K, EAA and LYCS as separate organizations (at least in terms of whose money will be used for what, which is what really matters). I don't know if this addresses your concern though.

I admit it makes the actual physical donation process look slightly clunky (no big shiny donate button), but my impression is they're not targeting casual donors so much, so this may not be such a problem.

Comment by Giles on Responses to questions on donating to 80k, GWWC, EAA and LYCS · 2012-11-21T01:37:40.986Z · LW · GW

This is really detailed, and exceeded my expectations! Thank you!

Comment by Giles on Room for more funding at the Future of Humanity Institute · 2012-11-19T17:57:28.712Z · LW · GW

Oh wow, totally wasn't expecting you to go ahead and answer that particular list of questions. Thanks for being so proactive!

Questions 7-11 aren't really relevant to FHI. Question 16 is relevant (at least the "are there other orgs similar to you?" part) but I'm guessing you'd answer no to that?

The other answers are helpful, thanks!

Comment by Giles on [SEQ RERUN] Surprised By Brains · 2012-11-18T23:17:27.907Z · LW · GW

Actually, the relevant thing isn't whether it's superlinear but whether a large AI/firm is more innovative than a set of smaller ones with the same total size. I was assuming that the latter would be linear, but it's probably actually sublinear as you'd expect different AIs/firms to be redundantly researching the same thing.

Comment by Giles on [SEQ RERUN] Surprised By Brains · 2012-11-18T03:43:00.802Z · LW · GW

Big thank you to Hanson for helping illuminate what it is he thinks they're actually disagreeing about, in this comment:

Eliezer, it may seem obvious to you, but this is the key point on which we've been waiting for you to clearly argue. In a society like ours, but also with one or more AIs, and perhaps ems, why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?

Just a thought: given a particular state-of-the-art, does an AI's innovation rate scale superlinearly with its size? If it does, an AI could go something like foom even if it chose to trade away all of its innovations, as it would stay more productive than all of its smaller competitors and just keep on growing.

The analogy with firms would suggest it's not like this; the analogy with brains is less clear. Also I get the sense that this doesn't correctly describe Yudkowsky's foom (which is somehow more meta-level than that).

Comment by Giles on What does the world look like, the day before FAI efforts succeed? · 2012-11-18T02:39:26.440Z · LW · GW

the only person so far to actually answer the goddamn prompt

What's worse is I wasn't even consciously aware that I was doing that. I'll try and read posts more carefully in the future!

Comment by Giles on What does the world look like, the day before FAI efforts succeed? · 2012-11-18T02:34:44.370Z · LW · GW

OK - I wasn't too sure about how these ones should be worded.

Comment by Giles on What does the world look like, the day before FAI efforts succeed? · 2012-11-17T22:23:35.995Z · LW · GW

Another dimension: value discovery.

  • Fantastic: There is a utility function representing human values (or a procedure for determining such a function) that most people (including people with a broad range of expertise) are happy with.
  • Pretty good: Everyone's values are different (and often contradict each other), but there is broad agreement as to how to aggregate preferences. Most people accept FAI needs to respect values of humanity as a whole, not just their own.
  • Sufficiently good: Many important human values contradict each other, with no "best" solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached.

Comment by Giles on What does the world look like, the day before FAI efforts succeed? · 2012-11-17T19:32:18.738Z · LW · GW

I like how you've partitioned things up into IA/government/status/memes/prediction/xrisk/security and given excellent/good/ok options for each. This helps imagine mix-and-match scenarios, e.g. "FAI team has got good at security but sucks at memes and status".

A few quick points:

The fantastic list has 8 points and the others have 7 (as there are two "government" points). This brings me on to...

Should there be a category for "funding"? The fantastic/good/ok options could be something like:

  • Significant government funding and/or FAI team use their superior rationality to acquire very large amounts of money through business
  • Attracts small share of government/philanthropy budgets and/or a lot of small donations from many individuals
  • Shoestring budget, succeeds anyway because the problem turns out to not be so hard after all once you've acquired the right insights

Does it have to be the FAI team implementing the "other xrisk reduction efforts" or can it just be "such institutions exist"?

Comment by Giles on Room for more funding at the Future of Humanity Institute · 2012-11-17T18:00:28.190Z · LW · GW

Another donation opportunity came up recently, which I responded to with a big long list of questions and I'll put the answers up when I get them. People seemed to like this approach - can we do something similar for the FHI?

Some thoughts:

  • Are the people at FHI going to be too busy to answer this kind of stuff?
  • Are they likely to be limited in how candid they can be with their answers if the answers are going to be made public?
  • I'm guessing Stuart or Sean would be the people you'd recommend talking to?

Comment by Giles on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-17T17:48:57.762Z · LW · GW

Note that the Tides Foundation is not the same thing as CEA. I'm not sure what CEA's exact relationship is with the Tides Foundation - I'll add this to the list of questions.

My guess would be that the relationship to Tides is necessary in order to get US tax deductibility (CEA is based in the UK), and that splitting off 80K and GWWC from each other wouldn't help with that. I will ask though.

Comment by Giles on [LINK] Steven Landsburg "Accounting for Numbers" - response to EY's "Logical Pinpointing" · 2012-11-15T03:18:28.048Z · LW · GW

Yes - fair enough.

Comment by Giles on Simulationist sympathetic magic · 2012-11-14T19:38:19.944Z · LW · GW

How many of them wouldn't believe it if it wasn't working?

Comment by Giles on Logical Pinpointing · 2012-11-14T16:22:39.055Z · LW · GW

Just to be clear, I assume we're talking about the second order Peano axioms here?

Comment by Giles on [LINK] Steven Landsburg "Accounting for Numbers" - response to EY's "Logical Pinpointing" · 2012-11-14T16:18:32.444Z · LW · GW

Heh - I'm amazed at how many things in this post I alternately strongly agree or strongly disagree with.

It’s important to distinguish between the numeral “2”, which is a formal symbol designed to be manipulated according to formal rules, and the noun “two”, which appears to name something

OK... I honestly can't comprehend how someone could simultaneously believe that "2" is just a symbol and that "two" is a blob of pure meaning. It suggests the inferential distances are actually pretty great here despite a lot of surface similarities between what Landsburg is saying and what I would say.

Instead, the point of the Peano axioms was to model what mathematicians do when they’re talking about numbers

I'm happy with this (although the next sentence suggests I may have been misinterpreting it slightly). Another way I would put it is that the Peano axioms are about making things precise and getting everyone to agree to the same rules so that they argue fairly.

Like all good models, the Peano axioms are a simplification that captures important aspects of reality without attempting to reproduce reality in detail.

I'd like an example here.

I’d probably start with something like Bertrand Russell’s account of numbers: We say that two sets of objects are “equinumerous” if they can be placed in one-one correspondence with each other

This is defining numbers in terms of sets, which he explicitly criticizes Yudkowsky for later. I'll take the charitable interpretation though, which would be "Y thinks he can somehow avoid defining numbers in terms of sets... which it turns out he can't... so if you're going to do that anyway you may as well take the more straightforward Russell approach"

the whole point of logic is that it is a mechanical system for deriving inferences from assumptions, based on the forms of sentences without any reference to their meanings

I really like this definition of logic! It doesn't seem to be completely standard though, and Yudkowsky is using it in the more general sense of valid reasoning. So this point is basically about the semantics of the word "logic".

But Yudkowsky is trying to derive meaning from the operation of inference, which won’t work because in second-order logic, meaning comes first.

I think this is a slight strawmanning of Yudkowsky. Here Yudkowsky is trying to define the meaning of one particular system - the natural numbers - not define the entire concept of meaning.

he’s effectively resorted to taking sets as primitive objects

So when studying logic, model theory etc. we have to make a distinction between the system of reasoning that we are studying and "meta-reasoning". Meta-reasoning can be done in natural language or in formal mathematics - generally I prefer when it's a bit more mathy because of the thing of precision/agreeing to play by the rules that I mentioned earlier. I don't see math and natural language as operating at different levels of abstraction though - clearly Landsburg disagrees (with the whole 2/two thing) but it's hard for me to understand why.

Anyway, if you're doing model theory then your meta-reasoning involves sets. If you use natural language instead though, you're probably still using sets at least by implication.

full-fledged Platonic acknowledgement

I think this is what using natural language as your meta-reasoning feels like from the inside. Landsburg would say that the natural numbers exist and that "two" refers to a particular one; in model theory this would be "there exists a model N with such-and-such properties, and we have a mapping from symbols "0", "1", "2" etc. to elements of N".

It seems enormously more plausible to me that numbers are “just out there” than that physical objects are “just out there”

So he's saying that the existence of numbers implies the existence of the physical universe, and therefore that the existence of numbers is more likely?? Or is there some quality of "just out there"ness that's distinct from existence?

Comment by Giles on Logical Pinpointing · 2012-11-14T14:53:25.454Z · LW · GW

I'm not quite sure what you're saying here - that "Numbers" don't exist as such but "the even naturals" do exist?

Comment by Giles on Simulationist sympathetic magic · 2012-11-14T00:25:46.927Z · LW · GW

I assume that if a statistically significant number of people noticed that they were trying sympathetic magic and it was working, then the simulation would have to be restarted or tweaked since it could alter the history of the world in significant ways. So you might want to plan out that aspect of your strategy before collecting any data.