**john_baez**on Results from MIRI's December workshop · 2013-12-28T17:28:54.745Z · score: 9 (9 votes) · LW · GW

Thanks for writing this - I've added links in my article recommending that people read yours.

**john_baez**on Probability, knowledge, and meta-probability · 2013-09-15T11:08:12.873Z · score: 22 (17 votes) · LW · GW

Ordinary probability theory and expected utility are sufficient to handle this puzzle. You just have to calculate the expected utility of each strategy before choosing one. In this puzzle a strategy is more complicated than simply putting some number of coins in the machine: it requires deciding what to do after each coin either succeeds or fails to release two coins.

In other words, a strategy is a choice of what you'll do at each point in the game tree - just like a strategy in chess.

We don't expect to do well at chess if we decide on a course of action that ignores our opponent's moves. Similarly, we shouldn't expect to do well in this probabilistic game if we only consider strategies that ignore what the machine does. If we consider *all* strategies, compute their expected utility based on the information we have, and choose the one that maximizes this, we'll do fine.
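As a concrete sketch (with made-up numbers, not ones from the post): suppose the machine is either working, paying out two coins with probability 0.9 per play, or broken and never paying, with even prior odds. Backward induction over the game tree then values every adaptive strategy at once:

```python
P_PAY_IF_WORKING = 0.9   # chance a working machine returns two coins (assumed)
PRIOR_WORKING = 0.5      # prior probability the machine works (assumed)
MAX_PLAYS = 10

def best_value(plays_left, p_working):
    """Expected net coins from optimal play, by backward induction.

    A strategy is a choice at every node of the game tree; here we fold
    that whole tree into one recursion, updating P(working) on each outcome.
    """
    if plays_left == 0:
        return 0.0
    p_pay = p_working * P_PAY_IF_WORKING      # chance the next play pays out
    # After a success we know the machine works (a broken one never pays).
    value_if_success = 2 + best_value(plays_left - 1, 1.0)
    # After a failure, update P(working) by Bayes' rule.
    p_fail = 1 - p_pay
    posterior_fail = p_working * (1 - P_PAY_IF_WORKING) / p_fail
    value_if_failure = best_value(plays_left - 1, posterior_fail)
    value_of_playing = -1 + p_pay * value_if_success + p_fail * value_if_failure
    return max(0.0, value_of_playing)          # we may always stop instead

print(best_value(MAX_PLAYS, PRIOR_WORKING))
```

With one play left and even odds, playing has negative expected value; with two plays left it is worth trying once, because the first coin buys information as well as a chance at a payout. That is exactly the value a strategy-blind analysis misses.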

I'm saying essentially the same thing Jeremy Salwen said.

**john_baez**on Probability, knowledge, and meta-probability · 2013-09-15T10:58:56.948Z · score: 2 (2 votes) · LW · GW

Right: a game where you repeatedly put coins in a machine and decide whether or not to put in another based on what occurred is not a single 'event', so you can't sum up your information about it in just one probability.

**john_baez**on Einstein's Arrogance · 2013-09-09T04:59:37.699Z · score: 7 (7 votes) · LW · GW

Once you assume:

1) the equations describing gravity are invariant under all coordinate transformations,

2) energy-momentum is not locally created or destroyed,

3) the equations describing gravity involve only the flow of energy-momentum and the curvature of the spacetime metric (and not powers or products or derivatives of these),

4) the equations reduce to ordinary Newtonian gravity in a suitable limit,

then Einstein's equations for general relativity are the only possible choice... except for one adjustable parameter, the cosmological constant.
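For reference, the equations these assumptions single out are, in standard modern notation:

```latex
% Einstein's field equations with cosmological constant \Lambda:
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\qquad \text{where } G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}.
```

Assumption 2) corresponds to the vanishing covariant divergence of $T_{\mu\nu}$, which the left-hand side satisfies automatically by the Bianchi identities.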

(First Einstein said this constant was nonzero, then he called that the "biggest blunder" of his life, and then it turned out he was right in the first place. It's not zero, it's roughly 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001. So, a bit of waffling on this issue is understandable.)

It took Einstein about 10 years of hard work to figure this out, with a lot of help from the mathematician Marcel Grossmann, who taught him the required math. But by the time he talked to that reporter he knew this stuff. That's what gave him his confidence.

His assumptions 1)-4) could have been wrong, of course. But he was playing a strong hand of cards - and he knew it.

By the way, he did write a paper where he got the equations wrong and predicted a *wrong* value for the deflection of starlight by the Sun's gravitational field. But luckily he caught his mistake before the experiment was done. If he'd caught his mistake afterwards, lots of people would have thought he was just retroactively fudging his theory to fit the data.

**john_baez**on How valuable is it to learn math deeply? · 2013-09-03T04:13:55.296Z · score: 24 (26 votes) · LW · GW

I agree that math can teach all these lessons. It's best if math is taught in a way that encourages effort and persistence.

One problem with putting *too* much time into learning math deeply is that math is much more precise than most things in life. When you're good at math, with work you can usually become completely clear about what a question is asking and when you've got the right answer. In the rest of life this isn't true.

So, I've found that many mathematicians avoid thinking hard about ordinary life: the questions are imprecise and the answers may not be right. To them, mathematics serves as a *refuge* from real life.

I became very aware of this when I tried getting mathematicians interested in the Azimuth Project. They are often sympathetic but feel unable to handle the problems involved.

So, I'd say math should be done in conjunction with other 'vaguer' activities.

**john_baez**on Mathematicians and the Prevention of Recessions · 2013-05-25T00:05:40.563Z · score: 8 (8 votes) · LW · GW

> As of December 31, 2012, the Treasury had received over $405 billion in total cash back on Troubled Assets Relief Program investments, equaling nearly 97 percent of the $418 billion disbursed under the program.

But TARP was just a small part of the whole picture. What concerns me is that there seem to have been somewhere between $1.2 trillion and $16 trillion in secret loans from the Fed to big financial institutions and other corporations. Even if they've been repaid, the low interest rates might represent a big transfer of wealth from the poor to the wealthy. And the fact that I'm seeing figures that differ by more than an order of magnitude is far from reassuring, too! The GAO report seems to be worth digging into. If not mathematicians, at least accountants could be helpful for things like this!

**john_baez**on Robustness of Cost-Effectiveness Estimates and Philanthropy · 2013-05-24T18:20:13.268Z · score: 6 (6 votes) · LW · GW

Very nice article!

I too wonder exactly what you mean by

> effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact.

Which kinds of qualitative analysis do you think are important, and why? Is that what you're talking about when you later write this:

> Estimating the cost-effectiveness of health interventions in the developing world has proved to be exceedingly difficult, and this [weighs] in favor of giving more weight to inputs for which it’s possible to make relatively well-grounded assessments. Some of these are room for more funding, the quality of the people behind a project and historical precedent.

?

I also have a question. Did you spend time looking for ways in which projects could be *more* effective than initially expected, or only ways in which they could be *less* effective? For example: did you think much about the 'multiplier effects' where making someone healthier made them better able to earn a living, support their relatives, and help other people... thus making other people healthier as well?

Even if your only ultimate concern were saving lives - which seems narrow-minded to me, and also a bit vague since all these people eventually die - it seems effects like this tend to turn *other* good things into extra lives saved.

It could be very hard to quantify these multiplier effects. But just as you'll find many negative feedbacks if you look hard for them, like these:

> Fathers may steal nets from pregnant mothers and sell them for a profit.
>
> LLIN recipients may use the nets for fishing.
>
> LLIN users may not fasten LLINs properly.
>
> Mosquitoes may develop biological resistance to the insecticide used on LLINs.

there could also be many *positive* feedbacks you'd find if you'd looked for *those*. So I'm a bit concerned that you're listing lots of "low-probability failure modes" but no "low-probability better-success-than-expected modes".

**john_baez**on A History of Bayes' Theorem · 2011-10-03T10:33:58.142Z · score: 8 (10 votes) · LW · GW

Maybe this is not news to people here, but in England, a judge has ruled against using Bayes' Theorem in court - unless the underlying statistics are "firm", whatever that means.

**john_baez**on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-21T06:31:48.447Z · score: 5 (7 votes) · LW · GW

I studied particle physics for a couple of decades, and I would not worry much about "mirror matter objects". Mirror matter is just one of many possibilities that physicists have dreamt up: there's no good evidence that it exists. Yes, maybe every known particle has an unseen "mirror partner" that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets - more theoretical possibilities not backed up by any good evidence. True, mirror matter is one of many speculative hypotheses that people have invoked to explain some peculiarities of the Tunguska event, but I'd say a comet was a lot more plausible.

Asteroid collisions, on the other hand, are known to have happened and to have caused devastating effects. NASA currently rates the chances of the asteroid Apophis colliding with the Earth in 2036 at 4.3 in a million. They estimate that the energy of such a collision would be comparable to a 510-megatonne thermonuclear bomb. This is ten times larger than the largest bomb actually exploded, the Tsar Bomba. The Tsar Bomba, in turn, was ten times larger than all the explosives used in World War II.

On the bright side, even if it hits us, Apophis will probably just cause local damage. The asteroid that hit the Earth at Chicxulub and killed off the dinosaurs released an energy comparable to a 240,000-megatonne bomb. That's the kind of thing that really ruins *everyone's* day.
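A quick back-of-envelope check of that ladder of energies (figures rounded as in the text; the WWII total is my rough assumption, chosen to match the stated ratio):

```python
# Rough energy ladder, in megatonnes of TNT equivalent:
apophis = 510        # estimated Apophis impact energy (per the figure above)
tsar_bomba = 50      # largest bomb ever detonated
wwii_total = 5       # all WWII explosives -- my order-of-magnitude assumption
chicxulub = 240_000  # dinosaur-killing impact (per the figure above)

print(apophis / tsar_bomba)    # roughly ten times the Tsar Bomba
print(tsar_bomba / wwii_total) # roughly ten times all of WWII
print(chicxulub / apophis)     # Chicxulub dwarfs even Apophis
```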

**john_baez**on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-19T05:38:23.214Z · score: 14 (14 votes) · LW · GW

For a related blog post see Bayesian Computations of Expected Utility over on Azimuth.

**john_baez**on John Baez Interviews with Eliezer (Parts 2 and 3) · 2011-03-30T00:48:17.096Z · score: 3 (7 votes) · LW · GW

> If you make choices consistently, you are maximizing the expected value of some function, which we call "utility".

Unfortunately in real life many important choices are made just once, taken from a set of choices that is not well-delineated (because we don't have time to list them), in a situation where we don't have the resources to rank all these choices. In these cases, the hypotheses of the von Neumann-Morgenstern utility theorem don't apply: the set of choices is unknown and so is the ordering, even on the elements we know are members of the set.

This is especially the case for anyone changing their career.

I agree that my remark about risk aversion was poorly stated. What I meant is that if I have a choice either to do something that has a very tiny chance of having a very large good effect (e.g., working on friendly AI and possibly preventing a hostile takeover of the world by nasty AI) or to do something with a high chance of having a small good effect (e.g., teaching math to university students), I may take the latter option where others may take the former. Neither need be irrational.
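A toy calculation (the numbers are mine, purely illustrative) shows how both choices can be consistent with maximizing *some* expected utility:

```python
import math

# Option A: a tiny chance of an enormous good effect.
# Option B: a high chance of a modest good effect.
p_a, payoff_a = 1e-6, 1e9   # the long shot
p_b, payoff_b = 0.9, 100.0  # the near-sure thing

# Risk-neutral expected value favors the long shot...
ev_a = p_a * payoff_a
ev_b = p_b * payoff_b

# ...but a concave (risk-averse) utility such as log reverses the ranking.
eu_a = p_a * math.log(1 + payoff_a)
eu_b = p_b * math.log(1 + payoff_b)

print(ev_a > ev_b)  # the long shot wins on expected value
print(eu_b > eu_a)  # the sure thing wins on log utility
```

Neither chooser is irrational; they are simply maximizing different utility functions.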

**john_baez**on John Baez Interviews with Eliezer (Parts 2 and 3) · 2011-03-30T00:24:18.269Z · score: 24 (24 votes) · LW · GW

> Baez replies with "Ably argued!" and presumably returns to his daily pursuits.

Please don't assume that this interview with Yudkowsky, or indeed any of the interviews I'm carrying out on *This Week's Finds*, are having no effect on my activity. Last summer I decided to quit my "daily pursuits" (n-category theory, quantum gravity and the like), and I began interviewing people to help figure out my new life. I interviewed Yudkowsky last fall, and it helped me decide that I should not be doing "environmentalism" in the customary sense. It only makes sense for me to be doing something new and different. But since I've spent 40 years getting good at math, it should involve math.

If you look at what I've been doing since then, you'll see that it's new and different, and it involves math. If you're curious about what it actually *is*, or whether it's something worth doing, please ask over at week313. I'm actually a bit disappointed at how little discussion of these issues has occurred so far in that conversation.

**john_baez**on [LINK] John Baez Interview with astrophysicist Gregory Benford · 2011-03-04T04:53:25.707Z · score: 3 (5 votes) · LW · GW

Since XiXiDu also asked this question on my blog, I answered over there.

> If I tell you that *all* you have to do is to read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI *is* much more important than climate change, are you going to take the time and do it?

I have read most of those things, and indeed I've been interested in AI and the possibility of a singularity at least since college (say, 1980). That's why I interviewed Yudkowsky.

**john_baez**on [LINK] John Baez Interview with astrophysicist Gregory Benford · 2011-03-03T01:18:42.641Z · score: 1 (3 votes) · LW · GW

In my interview of Gregory Benford I wrote:

> If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!
>
> If you say you’d take only box B, I’ll argue that’s stupid: there has *got* to be more money in *both* boxes than in just *one* of them!

It sounds like you find the second argument so unconvincing that you don't see why people consider it a paradox.

For what it's worth, I'd take only one box.
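The first argument can be checked by simulation. Here is a sketch with an imperfect predictor (the 0.99 accuracy and the dollar amounts are the conventional illustration, not figures from the interview):

```python
import random

def newcomb_payoff(two_boxer, accuracy=0.99, rng=random):
    """Payoff in Newcomb's problem with an imperfect predictor.

    Box A always holds $1,000; box B holds $1,000,000 iff the predictor
    foresaw one-boxing. (The 0.99 accuracy is an illustrative assumption.)
    """
    predicted_two_boxing = two_boxer if rng.random() < accuracy else not two_boxer
    box_b = 0 if predicted_two_boxing else 1_000_000
    return box_b + (1_000 if two_boxer else 0)

rng = random.Random(0)
trials = 10_000
one_box = sum(newcomb_payoff(False, rng=rng) for _ in range(trials)) / trials
two_box = sum(newcomb_payoff(True, rng=rng) for _ in range(trials)) / trials
# One-boxers average close to $1,000,000; two-boxers close to $1,000.
print(one_box, two_box)
```

The simulation just restates the first argument with numbers; the second argument (dominance) is untouched by it, which is why the problem stays a paradox.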

**john_baez**on [LINK] John Baez Interview with astrophysicist Gregory Benford · 2011-03-03T00:30:34.328Z · score: 12 (14 votes) · LW · GW

XiXiDu wrote:

> I really hope that John Baez is going to explain himself and argue for why he is more concerned with global warming than risks from AI.

Since I was interviewing Yudkowsky rather than the other way around, I didn't explain my views - I was getting him to explain his. But the last part of this interview will touch on global warming, and if you want to ask me questions, that would be a great time to do it.

(Week 311 is just the first part of a multi-part interview.)

For now, you might be interested to read about Gregory Benford's assessment of the short-term future, which somewhat resembles my own.

Tim Tyler wrote:

> It looks like a conventional "confused environmentalist" prioritisation to me.

I'm probably confused (who isn't?), but I doubt I'm conventional. If I were, I probably wouldn't be so eager to solicit the views of Benford, Yudkowsky and Drexler on my blog. A big problem is that different communities of intelligent people have very different views on which threats and opportunities are most important, and these communities don't talk to each other enough and think clearly enough to come to agreement, even on factual issues. I'd like to make a dent in that problem.

The list you cite is not the explanation that XiXiDu seeks.

**john_baez**on A Fundamental Question of Group Rationality · 2010-10-14T11:44:18.313Z · score: 3 (3 votes) · LW · GW

> What do you believe because others believe it, even though your own evidence and reasoning ("impressions") point the other way?

I don't know. If I did, I'd probably try to do something about it. But my subconscious mind seems to have prevented me from noticing examples. I don't doubt that they *exist*, lurking behind the blind spot of self-delusion. But I can't name a single one.

Feynman: "The first principle is that you must not fool yourself - and you are the easiest person to fool."

**john_baez**on Great Mathematicians on Math Competitions and "Genius" · 2010-10-14T05:52:14.916Z · score: 12 (12 votes) · LW · GW

Thurston said:

> Quickness is helpful in mathematics, but it is only one of the qualities which is helpful.

Gowers said:

> The most profound contributions to mathematics are often made by tortoises rather than hares.

Gelfand said it in a funnier way:

> You have to be fast only to catch fleas.

**john_baez**on Vanity and Ambition in Mathematics · 2010-10-14T05:39:07.772Z · score: 10 (10 votes) · LW · GW

It's a bit easier in math than other subjects to know when you're right and when you're not. That makes it a bit easier to know when you understand something and when you don't. And then it quickly becomes clear that *pretending* to understand something is counterproductive. It's much better to know and admit exactly how much you understand.

And the best mathematicians can be real masters of "not understanding". Even when they've reached the shallow or rote level of understanding that most of us consider "understanding", they are dissatisfied and say they don't understand - because they know the feeling of *deep* understanding, and they aren't content until they get that.

Gelfand was a great Russian mathematician who ran a seminar in Moscow for many years. Here's a little quote from Simon Gindikin about Gelfand's seminar, and Gelfand's gift for "not understanding":

> One cannot avoid mentioning that the general attitude to the seminar was far from unanimous. Criticism mainly concerned its style, which was rather unusual for a scientific seminar. It was a kind of a theater with a unique stage director playing the leading role in the performance and organizing the supporting cast, most of whom had the highest qualifications. I use this metaphor with the utmost seriousness, without any intention to mean that the seminar was some sort of a spectacle.
>
> Gelfand had chosen the hardest and most dangerous genre: to demonstrate in public how he understood mathematics. It was an open lesson in the grasping of mathematics by one of the most amazing mathematicians of our time. This role could only be played under the most favorable conditions: the genre dictates the rules of the game, which are not always very convenient for the listeners. This means, for example, that the leader follows only his own intuition in the final choice of the topics of the talks, interrupts them with comments and questions (a privilege not granted to other participants) [....] All this is done with extraordinary generosity, a true passion for mathematics.
>
> Let me recall some of the stage director's strategems. An important feature were improvisations of various kinds. The course of the seminar could change dramatically at any moment. Another important *mise en scène* involved the "trial listener" game, in which one of the participants (this could be a student as well as a professor) was instructed to keep informing the seminar of his understanding of the talk, and whenever that information was negative, that part of the report would be repeated. A well-qualified trial listener could usually feel when the head of the seminar wanted an occasion for such a repetition.
>
> Also, Gelfand himself had the faculty of being "unable to understand" in situations when everyone around was sure that everything is clear. What extraordinary vistas were opened to the listeners, and sometimes even to the mathematician giving the talk, by this ability not to understand. Gelfand liked that old story of the professor complaining about his students: "Fantastically stupid students - five times I repeat proof, already I understand it myself, and still they don't get it."

**john_baez**on Vanity and Ambition in Mathematics · 2010-10-14T05:12:02.658Z · score: 6 (6 votes) · LW · GW

The author of this post pointed out that he said "[i]t's noticeably less common for mathematicians *of the highest caliber* to engage in status games than members of the general population do." Somehow I hadn't noticed that.

I'm not sure how this affects my reaction, but I wouldn't have written quite what I wrote if I'd noticed that qualifier.

**john_baez**on Vanity and Ambition in Mathematics · 2010-10-14T02:34:41.114Z · score: 14 (14 votes) · LW · GW

In my 25 years of being a professional mathematician I've found many (though certainly not all) mathematicians to be acutely aware of status, particularly those who work at high-status institutions. If you are a research mathematician your job is to be smart. To get a good job, you need to convince other people that you are smart. So, there is quite a well-developed "pecking order" in mathematics.

I believe the appearance of "humility" in the quotes here arises not from lack of concern with status, but rather various other factors:

1) Most of us know that there are mathematicians much better than us: mathematicians who could, with their little pinkie on a lazy Sunday afternoon, accomplish deeds that we might struggle vainly for years to achieve.

2) Many of us realize that it's wiser to emphasize our shortcomings than boast of our accomplishments.

By the way: the people quoted in this article are all extremely high in status, and indeed it's mostly such mathematicians who wind up talking about themselves publicly, answering questions like "Can you remember when and how you became aware of your exceptional mathematical talent?" Every mathematician worth his or her salt knows of Hironaka, Langlands, Gromov, Thurston and Grothendieck. So these are not typical mathematicians: they are our heroes, our gods.

It is nice having humble gods. But still, they're not stupid: they know they're our gods.

**john_baez**on The Importance of Self-Doubt · 2010-08-20T11:02:50.075Z · score: 5 (7 votes) · LW · GW

It's some sort of mutant version of "just because you're paranoid doesn't mean they're not out to get you".

**john_baez**on Existential Risk and Public Relations · 2010-08-19T07:58:44.784Z · score: 15 (15 votes) · LW · GW

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

**john_baez**on The Least Convenient Possible World · 2010-05-03T00:03:38.484Z · score: 6 (6 votes) · LW · GW

Or: it says "This is undecidable in Zermelo-Fraenkel set theory plus the axiom of choice". In the case of P=NP, I might believe it.

> Ask again, with another famously unsolved math problem. Repeat until it stops saying that or you run out of problems you know.

I would not believe a purported god if it said all six remaining Clay math prize problems are undecidable.

**john_baez**on The Least Convenient Possible World · 2010-05-02T23:59:27.110Z · score: 17 (19 votes) · LW · GW

> Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.

So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility. Why not?

> While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle.

Why not?

It seems like you have some pre-established moral principles which you are using in your arguments against utilitarianism. Right?

> I don't see how you can compromise on these principles. Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership.

To me it seems that most people making difficult moral decisions make complicated compromises between competing principles.