Posts

Logic: the science of algorithm evaluating algorithms 2012-02-22T18:13:18.150Z
Quantum Russian Roulette 2009-09-18T08:49:43.865Z

Comments

Comment by Christian_Szegedy on Logic: the science of algorithm evaluating algorithms · 2012-02-24T02:47:37.787Z · LW · GW

In order to define the semantics,... This isn't precise enough for me to agree that it's true. Is it a claim? A new definition?

First: sorry for the bad grammar! Let me start by rephrasing the first sentence a bit more clearly:

"In order to define semantics, we need to define a map between the logic to the model ...."

It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: "For me, semantics is a mapping from a formal system to a physical system that could be continuous, or for which the Church thesis does not hold."

Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.

The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter which can be captured by discrete information-processing methods. What I argue here is that this implicit projection onto the discrete is best made explicit by viewing it from an algorithmic point of view.

Although we cannot verify the correctness ... What do you mean? Are you talking about some process other than the proof-checking program?

It is more like model checking: given a statement like "For each x: F(x)", since your input domain is countable, you can just loop over all substitutions. This program will never stop if the statement is true, but you can at least refute it in a finite number of steps if it is false. This is why I consider falsifiability important.
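The refutation loop can be sketched like this (a toy sketch; the function name and the example predicate are my own illustrative choices, not from the discussion):

```python
# Toy sketch: "model checking" a universally quantified statement
# "for each x: F(x)" over a countable domain by looping over all
# substitutions. The loop halts iff a counterexample exists.

def find_counterexample(predicate, max_steps=None):
    """Enumerate natural numbers; return the first x with predicate(x)
    False. If the statement is true, this never returns (the optional
    max_steps cap exists only so demonstrations terminate)."""
    x = 0
    while max_steps is None or x < max_steps:
        if not predicate(x):
            return x  # "for each x: F(x)" is refuted by this x
        x += 1
    return None  # no counterexample found within the cap

# A false universal statement is refuted in finitely many steps:
print(find_counterexample(lambda x: x < 10))  # -> 10
```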

I agree that there is a caveat: this only works for simple expressions with universal quantifiers alone, not for more complex logical expressions like the twin primes conjecture, which can be cast as:

foreach x: exists y: y > x and prime(y) and prime(y+2)

Still, this expression can be cast into the form "T(x) halts for every input x", where T is the program that searches for the next pair of twin primes both bigger than x.
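A minimal sketch of such a program T, assuming a naive trial-division primality test (the names are mine, for illustration only):

```python
# Sketch of the program T from the comment: T(x) searches for the next
# pair of twin primes both bigger than x. The twin primes conjecture is
# then equivalent to "T(x) halts for every input x".

def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def T(x):
    """Halts with the first twin prime pair (y, y + 2), y > x; it would
    run forever on some input x exactly if the conjecture were false."""
    y = x + 1
    while True:
        if is_prime(y) and is_prime(y + 2):
            return (y, y + 2)
        y += 1

print(T(10))   # -> (11, 13)
print(T(100))  # -> (101, 103)
```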

But what about longer sequences of quantifiers, like

f = foreach a: exists b: foreach c: exists d: .... F(a,b,c,d,...)

Can this still be cast into the form "T(x) halts for every input x"? If it could not, we would need to add another layer of logic around the halting of Turing machines, which would defeat our original purpose.

In fact, there is a neat trick to avoid that: you can define a program T'(n) which takes a single natural number and checks the validity of (not f) for every substitution of total length < n. Then f is equivalent to the statement: T'(n) halts for every natural number n.

Which means we managed to hide the cascade of quantifiers within the algorithm T'. Cool, huh?

Comment by Christian_Szegedy on Logic: the science of algorithm evaluating algorithms · 2012-02-23T03:53:40.187Z · LW · GW

The overall message is not really new technically, but its philosophical implications are somewhat surprising, even for mathematicians. In general, looking at the same thing from different angles helps to acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.

A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Goedel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program-evaluating algorithms.

I think there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. I think it is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one's own world view.

I am sorry if my account was unclear at points, and I am glad to provide clarification on any of them.

Comment by Christian_Szegedy on Logic: the science of algorithm evaluating algorithms · 2012-02-23T02:01:29.443Z · LW · GW

I expected the reaction with the countably infinite models, but I did not expect it to be the first. ;)

I wanted to get into that in the write-up, but I had to stop at some point. The argument is that in order to have scientific theories, we need falsifiability, which means that science always necessarily deals with a discrete projection of the physical world. On the other hand, so far every discrete manifestation of a physical system has seemed to be modellable by Turing machines. (This assumption is called the Church thesis.) If you add these two, then you arrive at the above conclusion.

Another reason for it not being a problem is that every (first-order) axiom system has a countable model by the Löwenheim-Skolem theorem. (Yes, even the theory of real numbers has a countable model, which is also known as the Skolem paradox.)

Actually, I don't think that the technical content of the write-up is novel; it is probably something that was already clear to Turing, Church, Goedel and von Neumann in the '40s and '50s. IMO, the takeaway is a certain, more pragmatic way of thinking about logic in the age of information processing, instead of sticking to an outdated intuition; also the explicit recognition that the domains of mathematical logic and AI are much more directly connected than it would naively seem.

Comment by Christian_Szegedy on Rationality quotes: September 2010 · 2010-09-15T06:51:28.390Z · LW · GW

The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.

Marcus Aurelius

Comment by Christian_Szegedy on Less Wrong: Open Thread, September 2010 · 2010-09-08T07:08:36.792Z · LW · GW

I agree with you. I also think that there are several reasons for that:

First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.

The other reason is more cultural: if you train your child for something more useful, like science or mathematics, then people will say: "Poor kid, are you trying to make a freak out of him? Why can't he have a childhood like anyone else?" Traditionally, there is much less opposition to music, art or sports training. Perhaps they are viewed as "fun activities."

Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will give much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (age 10, or even 6, is almost too late to begin seriously in some of those areas).

Comment by Christian_Szegedy on Something's Wrong · 2010-09-07T18:26:45.628Z · LW · GW

I found the most condensed essence (also parody) of religious arguments for fatalism in Greg Egan's Permutation City:

Even though I know God makes no difference. And if God is the reason for everything, then God includes the urge to use the word God. So whenever I gain some strength, or comfort, or meaning, from that urge, then God is the source of that strength, that comfort, that meaning. And if God - while making no difference - helps me to accept what's going to happen to me, why should that make you sad?

Logically irrefutable, but utterly vacuous...

Comment by Christian_Szegedy on Rationality quotes: September 2010 · 2010-09-03T18:05:47.011Z · LW · GW

You should not take the statement too literally: look at it in its historical context. Probably the biggest problems in Russell's time were wars caused by nationalism and unfair resource allocation due to bad (idealistic/traditionalist) policies. Average life expectancy was around 40-50 years. I don't think anyone considered, e.g., mortality a problem that could or should be solved. (Neither do over 95% of people today.) The population was much smaller. Earth was also in a much more pristine state than today.

Times have changed. We have more technical issues today, since we can address more issues with technology, plus we are on a general trajectory which is ecologically unsustainable if we don't manage to invent and use the right technologies quickly. I think this is the fundamental mistake traditional ecological movements are making: there is no turning back. Either we manage to progress rapidly enough to counteract what we broke (and will inevitably break), or our civilization collapses. We have already bet our future. Being reasonable would have worked 100 years ago; today we must be very smart as well.

Comment by Christian_Szegedy on Rationality quotes: September 2010 · 2010-09-03T06:07:20.305Z · LW · GW

I just find it a bit circular that you want evidence for the assertion that assertions need evidence.

Comment by Christian_Szegedy on Berkeley LW Meet-up Sunday September 5 · 2010-09-02T23:23:47.125Z · LW · GW

I'd prefer Sunday or Saturday (9/5 would work for me.)

Comment by Christian_Szegedy on Rationality quotes: September 2010 · 2010-09-02T23:08:57.847Z · LW · GW

Why? Do you agree with him? :)

Comment by Christian_Szegedy on Rationality Lessons in the Game of Go · 2010-08-25T00:04:48.280Z · LW · GW

Did you try GNU Go? That should be hard enough for most beginners.

The problem with GNU Go is that it teaches a style that would not be effective in beating humans. Generally, you have to build up moderately difficult situations with a deep sequence of forcing moves. These kinds of deep but simple-to-prune trees are very easily read by humans, but GNU Go sucks at them, especially if they are on the interaction boundary of bigger fights.

Still, it can be a valuable learning tool, but one will learn a different skill set than when playing against humans.

Comment by Christian_Szegedy on Book Recommendations · 2010-08-11T23:33:14.105Z · LW · GW

Let me advertise my absolute favorite: an obscure Hungarian writer called Geza Csath. He was a doctor and journalist at the beginning of the 20th century, and he wrote the most beautiful and objective stories on self-deception and other human weaknesses: highly accurate, without moralizing or romanticizing.

I've only read the originals, but I hope the translation is not too bad. (Unfortunately, in English it is available only used: http://www.amazon.com/Magicians-Garden-Other-Stories/dp/0231047320/ref=sr_1_4?s=books&ie=UTF8&qid=1281569193&sr=1-4 )

A good book I am reading currently is The Art Of Choosing by Sheena Iyengar.

Comment by Christian_Szegedy on Five-minute rationality techniques · 2010-08-11T22:58:44.395Z · LW · GW

I think it is mostly hopeless trying to teach rationality to most people.

For example, both of my parents studied math at university and still have a very firm grip on the fundamentals.

I just got a phone call yesterday from my father in Germany, saying: "We saw in the news that a German tourist couple got killed in a shooting in San Francisco. Will you avoid going out after dark?" When I tried to explain that I won't update my risk estimates based on any such singular event, he seemed to listen and to understand formally what I said. Still, he was completely unimpressed, finishing the conversation in an even more worried tone: "I see, but you will take care, won't you?"

Comment by Christian_Szegedy on Open Thread: July 2010, Part 2 · 2010-07-26T23:39:26.465Z · LW · GW

He he, poor WW2 veterans miss the deadline by just one year:

2044 - The last veterans of WW2 are passing away

2045 - Humans are becoming intimately merged with machines

Comment by Christian_Szegedy on Open Thread: July 2010, Part 2 · 2010-07-26T23:33:31.473Z · LW · GW

2031 – Web 4.0 is transforming the Internet landscape

It could be funny, if it were a joke... :(

Comment by Christian_Szegedy on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-16T23:14:58.633Z · LW · GW

That's true.

But the parallel was a bit more specific: "good sense of humor" (which he concretely brought up as the most typical example) is an attribute one can easily claim to have, as it is impossible to measure.

Comment by Christian_Szegedy on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-16T17:51:44.143Z · LW · GW

You missed my point. (Which you would not have, if you had read The Upside of Irrationality by Dan Ariely, an excellent book.)

Comment by Christian_Szegedy on What Intelligence Tests Miss: The psychology of rational thought · 2010-07-14T19:34:08.166Z · LW · GW

This sounds like "I am not very good looking, but have a great sense of humor." ;)

Comment by Christian_Szegedy on Open Thread: July 2010, Part 2 · 2010-07-13T01:01:51.569Z · LW · GW

... and different to almost any other unproven technology (for the exact same reason).

Comment by Christian_Szegedy on Open Thread: July 2010, Part 2 · 2010-07-13T00:40:58.408Z · LW · GW

There are a lot of alternatives to fusion energy and since energy production is a widely recognized societal issue, making individual bets on that is not an immediate matter of life and death on a personal level.

I agree with you, though, that a sufficiently high probability estimate on the workability of cryonics is necessary to rationally spend money on it.

However, if you give 1% chance for both fusion and cryonics to work, it could still make sense to bet on the latter but not on the first.

Comment by Christian_Szegedy on Open Thread: July 2010, Part 2 · 2010-07-13T00:30:37.311Z · LW · GW

I think one of the reasons this self-esteem seeding works is that identifying your core values makes other issues look less important.

On the other hand, if you, e.g., independently expressed that God is an important element of your identity and belief in him is one of your treasured values, then it may backfire, and it will be even harder to move you away from that. (Of course I am not sure: I have never seen any scientific data on that. This is purely a wild guess.)

Comment by Christian_Szegedy on Rationality Quotes: July 2010 · 2010-07-12T18:59:54.875Z · LW · GW

In my reading, the assessment was funny exactly because it was emotional and therefore biased. That's what the use of "son of a bitch" suggested as well.

Comment by Christian_Szegedy on Rationality Quotes: July 2010 · 2010-07-08T23:59:44.267Z · LW · GW

I am stunned by the relatively high mod-points of this exchange.

I agree that the quotes are moderately funny. (Albeit the M.S. quote was much funnier in its specific context within the game, and even there it was his whitewash response to an action that earned Shepard renegade points.)

Still, I can't see how all this is related to the "art of human rationality"...

Comment by Christian_Szegedy on Rationality Quotes: July 2010 · 2010-07-08T21:24:26.714Z · LW · GW

I think the quote is useless and rooted in a fundamental misunderstanding of IT.

E.g., experimental mathematics would not exist without computers. Computer simulation is a fantastic way to empirically produce and check hypotheses.

Comment by Christian_Szegedy on Rationality Quotes: July 2010 · 2010-07-07T21:33:54.630Z · LW · GW

Why do you care? You should not follow it anyway. ;)

Comment by Christian_Szegedy on Open Thread: July 2010 · 2010-07-07T21:21:06.967Z · LW · GW

This is already exploited on cell phones to some extent.

Comment by Christian_Szegedy on Rationality Quotes: July 2010 · 2010-07-07T21:00:36.765Z · LW · GW

We don't need anyone to tell us what to do. Not Savonarola, not the Medici. We are free to follow our own path. There are those who will take that freedom from us, and too many of you gladly give it. But it is our ability to choose- whatever you think is true- that makes us human...There is no book or teacher to give you the answers, to show you the way. Choose your own way! Do not follow me, or anyone else.

(Assassin's Creed II, Ezio Auditore's speech)

Comment by Christian_Szegedy on A Challenge for LessWrong · 2010-06-30T19:06:05.298Z · LW · GW

Rationalists should win

I hate to see this clever statement taken out of context and reinterpreted as a moralizing slogan.

If you trace it back, this was originally a desideratum on rational thinking, not some general moral imperative.

It plainly says that if your supposedly "rational" strategy leads to a demonstrably inferior solution or gets beaten by some stupider looking agent, then the strategy must be reevaluated as you have no right to call it rational anymore.

Comment by Christian_Szegedy on Open Thread June 2010, Part 4 · 2010-06-25T18:42:23.694Z · LW · GW

Chimps copy high status individuals in their groups

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-22T22:31:22.686Z · LW · GW

In my reading it means, that there are already actual implementations for all probability inference operations that the authors consider in the book.

This was probably a true statement even in the '60s. It does not mean that the robot as a whole is resource-wise feasible.

An analogy: it is not hard to implement all (non-probabilistic) logical derivation rules. It is also straightforward to use them to generate all theorems of a formal system (e.g. ZFC). However, this does not imply that we have a practical (i.e. efficient) general-purpose mathematical theorem prover. It gives an algorithm to prove every provable theorem eventually, but its run-time consumption makes this approach practically useless.
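The brute-force enumeration idea can be illustrated in a toy propositional (Horn-clause) setting; this is only an analogy, and `derive_all` and the example atoms are my own illustrative names, not anything from the book:

```python
# Toy analogy (my own sketch): forward chaining with modus ponens over a
# tiny propositional axiom set. In principle this enumerates every
# derivable statement; applying the same brute-force idea to ZFC is what
# makes the "prove everything eventually" approach practically useless.

def derive_all(facts, implications):
    """facts: set of atoms taken as axioms; implications: list of
    (premise, conclusion) pairs. Apply modus ponens to a fixed point."""
    known = set(facts)
    while True:
        new = {c for (p, c) in implications if p in known and c not in known}
        if not new:
            return known
        known |= new

theorems = derive_all({"A"}, [("A", "B"), ("B", "C"), ("D", "E")])
print(sorted(theorems))  # -> ['A', 'B', 'C']
```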

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-18T23:40:16.651Z · LW · GW

Sorry for the confusion. I was very superficial. Of course, you are correct about being able to simplify out those values.

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-18T07:58:34.623Z · LW · GW

Fascinating talk (Highly LW-relevant)

http://www.ted.com/talks/michael_shermer_the_pattern_behind_self_deception.html

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-18T06:45:10.904Z · LW · GW

Actually, solving SAT problems is just the simplest case. Even if you have some certain variables (with either 0 or 1 plausibility), it's still NP-complete: you can't just simplify them away in polynomial time. [EDIT: This is wrong, as Jonathan pointed out.]

In the extreme case, since we also have the rule that the "robot" has to use all the available information to the fullest extent, the "robot" must be insanely powerful. For example, if the calculation of some plausibility value depends on the correctness of an algorithm (known to the "robot" with a very high probability), then it will have to be able to solve the halting problem in general.

Even if you constrain your probability values to never be certain or impossible, you can always choose small (or large) enough values so that the computation of the probabilities can be used to solve the discrete version of the problem.

For example, in the simplest case: if you just have a set of propositions (let us say in conjunctive normal form), the consistency desideratum implies the ability of the "robot" to solve SAT problems, even if the starting plausibility values for the literals fall into the open (0,1) interval.
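For illustration, here is a toy brute-force satisfiability check over CNF clauses (my own sketch; it is exponential in the number of variables, which is exactly the infeasibility point):

```python
# My own illustrative sketch: deciding whether a CNF formula is
# satisfiable by brute force over all 2^n truth assignments. A
# consistent "robot" must implicitly answer exactly such questions.

from itertools import product

def satisfiable(num_vars, clauses):
    """clauses: list of clauses, each a list of signed ints in the usual
    DIMACS-style convention: [1, -2] means (x1 or not x2)."""
    for assignment in product([False, True], repeat=num_vars):
        def lit_true(l):  # value of literal l under this assignment
            return assignment[abs(l) - 1] == (l > 0)
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return True
    return False

print(satisfiable(2, [[1, 2], [-1], [-2]]))  # -> False
print(satisfiable(2, [[1, 2], [-1, -2]]))    # -> True
```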

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-17T18:42:09.543Z · LW · GW

Yes, I've read that (pretty good) book quite a while ago and it is also referenced in the TED talk I mentioned.

This was one of the reasons I was surprised that there is still such a huge disagreement about the figures even among experts.

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-17T18:36:17.286Z · LW · GW

Sorry, I never tried to imply that an AI built on the Bayesian principles is impossible or even a bad idea. (Probably, using Bayesian inference is a fundamentally good idea.)

I just tried to point out that easy looking principles don't necessarily translate to practical implementations in a straightforward manner.

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-16T22:44:00.769Z · LW · GW

I think it is impossible to decide this based on Chapter 1 alone, for the second criterion (qualitative correspondence with common sense) is not yet specified formally.

If you look into Chapter 2, in the derivation of the product rule, he uses this rubber assumption to get the results he aims for (very similarly to you).

I think one should not take some statements of the author (like "... our search for desiderata is at an end ...") too seriously.

In some sense this informative approach is defensible, from another perspective it definitely looks quite pretentious.

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-16T21:09:04.580Z · LW · GW

These are perfectly valid arguments, and I admit that I share your skepticism concerning the economic competitiveness of fusion technology. If I had to decide about buying some security whose payout depended on the amount of energy produced by fusion power within 30 years, I would not hurry to place any bet.

What I lack is your apparent confidence in ruling out the technology based on the technological difficulties we face at this point in time.

I am always surprised how much the opinions of so-called experts diverge when it comes to estimating the feasibility and cost of different energy production options (even excluding fusion power). For example, there is a recent TED video where people discuss the pros and cons of nuclear power. The whole discussion boils down to the question: what resources do we need in order to produce X amount of energy using

  • nuclear
  • wind
  • solar
  • biofuel
  • geothermal

power? For me, the disturbing thing was that the statements about the resource usage (e.g. area consumption, but also risks) of the different technologies were sometimes off by orders of magnitude.

If we lack the information to produce numbers in the same ballpark even for technologies that we have been using for decades (if not longer), then how much confidence can we have about the viability, costs, risks and competitiveness of a technology, like fusion, that we have not even started to tap?

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-16T19:40:02.885Z · LW · GW

Funny, I've been entertaining the same idea for a few weeks.

Every time I read statements like "... and then I update the probabilities, based on this evidence ...", I think to myself: "I wish I had the time (or processing power) he thinks he has. ;)"

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-16T19:23:07.412Z · LW · GW

Imagine what people must have thought in 1910 about the feasibility of getting to the Moon or generating energy by artificially splitting atoms (especially within the 20th century).

Comment by Christian_Szegedy on Book Club Update and Chapter 1 · 2010-06-15T19:07:50.840Z · LW · GW

Speaking of Chapter 1, it seems essential to point out another point that may be unclear on superficial reading.

The author introduces the notion of a reasoning "robot" that maintains a consistent set of "plausibility" values (probabilities) according to a small set of rules.

To a modern reader, it may make the impression that the author here suggests some practical algorithm or implementation of some artificial intelligence that uses Bayesian inference as a reasoning process.

I think this misses the point completely. First: it is clear that maintaining such a system of probability values (consistently!) even for a set of simple Boolean formulas amounts to solving SAT problems and is therefore computationally infeasible in general.

Rather, the author's purpose in introducing the "robot" was to avoid the misconception that the plausibility desiderata are subjective, inaccurate notions that depend on some hidden features of the human mind. By detaching the inference rules from the human mind and using an idealized "robot", the author argues that these axioms and their consequences can and should be studied mathematically, independently from all other features and aspects of human thinking and rationality.

So here the objective was not to build some intelligence, but rather to study an abstract and computationally unconstrained version of intelligence obeying the above principles alone.

Such an AI will never be realized in practice (due to inherent complexity limitations, and here I don't just mean P != NP!). Still, if we can prove what this theoretical AI would have to do in certain specific situations, then we can learn important lessons about the above principles, or even guide our decisions by the insights we gained from that study.

Comment by Christian_Szegedy on Open Thread June 2010, Part 3 · 2010-06-14T22:46:35.821Z · LW · GW

Your numbers seem to be off (e.g. 4.26e9 J/sec would be truly minuscule). You probably meant 4.29e29 J/sec, but then 5.74e5 years is wrong. According to Wikipedia, the Sun's energy output is 1.2e34 J/s, which is still at odds with both of your numbers.

Comment by Christian_Szegedy on Less Wrong Book Club and Study Group · 2010-06-11T00:40:59.652Z · LW · GW

I think one should learn on different levels at the same time:

If you only do what's convenient, your progress stops.

If you don't revisit the basics from time to time, you build on sand.

It is necessary to challenge oneself and at the same time work on the fundamentals. It is both inspiring and necessary to strike the right balance between the two extremes: A constant back and forth between them proved to be both the most productive and most entertaining for me personally.

This is the reason I am also interested in this study group: for me, it is revisiting the fundamentals. Although this book is relatively basic, it is very well written and focuses more on the right philosophy than on pragmatic issues. On the other hand, it is very detailed in places that other books easily take for granted, and it points out issues that others just step over. It is really great reading to deepen one's knowledge. I am unsure, though, whether it is the best introductory reading for someone who just wants to acquire a practical skill set.

Comment by Christian_Szegedy on Less Wrong Book Club and Study Group · 2010-06-10T21:28:31.175Z · LW · GW

I think some interactive discussion would definitely help to keep up the spirit.

I'd definitely be interested in joining a real-time discussion if there is enough substance for a clear agenda. Using IRC with pidgin-latex sounds good to me.

It is also not strictly necessary that everybody participates at the same time: we could have two meetings, for two different time zones discussing the same topic.

Comment by Christian_Szegedy on Less Wrong Book Club and Study Group · 2010-06-09T21:28:11.792Z · LW · GW

I think this could be a fun project.

Besides IRL (which is hard to organize) I think other real time communication could be tried out as well. What do you think about the following options:

  • Traditional IRC
  • Google wave
  • Skype conference call
  • Realtime desktop sharing (e.g. mikogo up to 10 participants.)

Does anyone know a good IRC infrastructure that allows for quickly entering and displaying TeX formulas?

Comment by Christian_Szegedy on Less Wrong Book Club and Study Group · 2010-06-09T21:15:24.707Z · LW · GW

I've already read the book (the published paper version) without solving the exercises.

I'd be interested in participating in a technical discussion. Maybe (but not very probably) even IRL (Bay Area).

Comment by Christian_Szegedy on Bayes' Theorem Illustrated (My Way) · 2010-06-04T19:45:48.870Z · LW · GW

I can attest that being Christian itself does not seem to make a negative difference. :D

Comment by Christian_Szegedy on Bayes' Theorem Illustrated (My Way) · 2010-06-04T18:03:52.563Z · LW · GW

For me, the eye opener was this outstanding paper by E.T. Jaynes:

http://bayes.wustl.edu/etj/articles/well.pdf

IMO this describes the essence of the difference between the Bayesian and frequentist philosophy way better than any amount of colorful polygons. ;)

Comment by Christian_Szegedy on Bayes' Theorem Illustrated (My Way) · 2010-06-04T00:09:44.341Z · LW · GW

I think the only reasonable interpretation of the text is clear, since otherwise other standard problems would be ambiguous as well:

"What is probability that a person's random coin toss is tails?"

It does not matter whether you get the information from an experimenter by asking "Tell me the result of your flip!" or "Did you get tails?". You just have to stick to the original text (tails) when you evaluate the answer in either case.

[EDIT: I think I misinterpreted your comment. I agree that Daniel's introduction was ambiguous for the reasons you have given.

Still, the wording he has given, "I have two children, and at least one of them is a boy-born-on-a-Tuesday.", clarifies it (and makes it well defined under the standard assumptions of indifference).]

Comment by Christian_Szegedy on Cultivating our own gardens · 2010-06-02T22:53:27.653Z · LW · GW

There are optimization problems where a bottom-up approach works well, but sometimes top-down or in most cases not so easily labeled methods are necessary.

If mathematical optimization is a proper analogy (or even framework) for solving social/ethics etc. problems, then the logical conclusion would be: The approach must depend heavily on the nature of the problem at hand.

Locality has its very important place, but I can't see how one could address planet-wide "tragedy of the commons"-type issues by purely local methods.

Comment by Christian_Szegedy on Open Thread: June 2010 · 2010-06-02T21:46:23.270Z · LW · GW

We already have these buttons on LessWrong... ;)