JohnBuridan's Shortform 2020-07-19T03:26:04.065Z
DARPA Digital Tutor: Four Months to Total Technical Expertise? 2020-07-06T23:34:21.089Z
Quick Look #1 Diophantus of Alexandria 2020-06-24T19:12:14.672Z
What is a Rationality Meetup? 2019-12-31T16:15:34.033Z
Preserving Practical Science (Phronesis) 2019-11-28T22:37:21.591Z
Saint Louis Junto Meeting 11/23/2019 2019-11-24T21:36:20.093Z
Junto: Questions for Meetups and Rando Convos 2019-11-20T22:11:55.350Z
Meetup 2019-11-20T21:21:50.889Z
Welcome to Saint Louis Junto [Edit With Your Details] 2019-11-20T21:19:38.632Z
Prediction Markets Don't Reveal The Territory 2019-10-12T23:54:23.693Z
10 Good Things about Antifragile: a positivist book review 2019-04-27T18:22:16.665Z
No Safe AI and Creating Optionality 2019-04-17T14:08:41.843Z
Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall 2019-02-23T17:48:46.837Z
"The Unbiased Map" 2019-01-27T19:08:10.051Z
We Agree: Speeches All Around! 2018-06-14T17:53:23.918Z


Comment by JohnBuridan on St. Louis, MO – ACX Meetups Everywhere 2021 · 2021-09-25T19:57:41.029Z · LW · GW

Here's the updated location:

Comment by JohnBuridan on St. Louis, MO – ACX Meetups Everywhere 2021 · 2021-09-25T18:17:01.956Z · LW · GW

Hi all,

If you would like to order food pick an item from here for today.

Comment by JohnBuridan on St. Louis, MO – ACX Meetups Everywhere 2021 · 2021-08-29T14:22:24.190Z · LW · GW

The new location in the park is the Gus Fogt picnic benches. ///crazy.heat.costs

Comment by JohnBuridan on DARPA Digital Tutor: Four Months to Total Technical Expertise? · 2021-07-15T16:42:56.524Z · LW · GW

I understood the problems to be of the sort one would learn in becoming whatever the Navy-ship equivalent of a Cisco administrator is. It really seems to have been a training program for all the sysadmin and IT needs of a Navy ship.

Here are some examples of problems they had to solve:

Establish a fault-tolerant Windows domain called to support the Operation. 

Install and configure an Exchange server for 

Establish Internet access for all internal client machines and servers. 

Comment by JohnBuridan on The topic is not the content · 2021-07-15T16:20:20.863Z · LW · GW

Emphasizing "activity" over "topic" or "industry" is very important when considering jobs. However, in advising students and talking with many people, I find that a lot of people genuinely are motivated by just being in a certain industry. It makes them happy to know they are part of an industry they love, even if the job activities are not very good for them or especially rewarding. That cost is offset by the benefit of being in "the industry" or sometimes by having coworkers they like.

The career guide book "What Color Is Your Parachute?" makes these distinctions and is superbly useful at helping you determine your indifference curves.

Comment by JohnBuridan on What books are for: a response to "Why books don't work." · 2021-04-22T14:36:38.776Z · LW · GW

My essay How to Read a Book for Understanding was 90% motivated by annoyance at Andy's misunderstanding of how books are read differently by different types of readers. I never reformatted this for Less Wrong, but probably should.

I definitely agree, though, with your point about salience. I think it is an important though inadequate defense of books. Salience can be achieved in many more sophisticated ways than reading; even some YouTube videos create salience on surprisingly complex topics. Books offer more than just this as a medium.

Comment by JohnBuridan on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-31T11:38:37.378Z · LW · GW

When I heard this, I was simply confused at how unexamined Matt's position was.

The idea of being an expert on ethics is something the Rationalist community is quite familiar with. Effective Altruism, in general, assumes that an EA mode of moral behavior is in fact something one can develop expertise in. Perhaps you have different values than EA, but even so, whatever your values, there can be more or less effective ways of achieving them; an expert is just someone who has mapped the tensions and conflicts and contradictions involved in thinking about the territory clearly.

Comment by JohnBuridan on Thirty-three randomly selected bioethics papers · 2021-03-24T01:44:23.880Z · LW · GW

I have a practicing bioethics consultant on my team, and I have very much realized that the rationalsphere is unduly prejudiced against the field. This paper set confirms that for me, since it shows my prior reading in the field was the product of selection bias.

Bioethics is, in my opinion, healthier than philosophy in that it more often requires coming to a decision on a current moral question.

Notice, however, that bioethics papers will skew in a way that bioethics consultants will not. In general, people in the field have an additional specialty or practice, such as law, clinical research, hospital management, drug research, social work, psychology, and, of course, academia. I think this diversity of professions with actual jobs to perform makes the field healthier (but perhaps less coherent) than Eliezer and Alex Tabarrok realize.

A higher proportion of these papers address interesting and consequential questions, even if the authors fumble, than one finds in philosophy.

Comment by JohnBuridan on JohnBuridan's Shortform · 2021-02-16T01:24:06.936Z · LW · GW

Thinking out loud here about inference.

Darwin's original theory relied on three facts: overproduction of offspring, variability of offspring, and inheritance of traits. These facts were used to formulate a mechanism: the offspring best adapted to the environment for reproduction would, on average, displace the population of those less adapted. Overproduction ensured that there was selection pressure, or at least group stasis on average (and not dysgenics); variability allowed for positive mutations; heritability allowed for persistence. Call it natural selection for short.

What I'm interested in is the mistake Darwin made in his next step. He assumes that because the process of natural selection tends on average toward fitness, the evolution of species can only be an imperceptibly gradual process.

This is incorrect: evolution can happen alarmingly fast.

What I'm interested in is why Darwin thought this, and whether the error is general enough that we can learn something about inferential reasoning that would apply to other cases. At the time, geologists disagreed about the rate of major geological transitions in Earth's history. Darwin threw himself entirely behind Charles Lyell's slow-change puritanism. But to give Darwin credit, he thought this had to be the way it was because his law of averages requires a law of large numbers, and you can't get large populations without an immense number of years.

I think the big mistake Darwin made was placing too high a prior on gradual change, even though he knew there was insufficient evidence for gradual change in the geological record. His explanation for this lack of evidence was that the evidence had mostly been destroyed over time: the geological record we have is a tiny fragment of an immense story from which we can only pick up the pieces. "Absence of evidence is not evidence of absence."

But he should have mapped out the other hypothetical world too, even if just a bit. For Darwin's theory of evolution is not in fact dependent on gradual change, but can accommodate times of stasis followed by times of chaos and development.

To me the lesson is to be as clear as possible about what aspects of your model are essential and which are reasonable extensions.
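A deterministic toy model (my own sketch; the fitness numbers are invented) shows how fast selection can move a rare advantageous variant, which is exactly the point gradualism misses:

```python
# Toy replicator dynamics: frequency p of an advantageous variant after
# one generation of selection, given relative fitnesses w_new and w_old.
def next_freq(p: float, w_new: float = 1.5, w_old: float = 1.0) -> float:
    return p * w_new / (p * w_new + (1 - p) * w_old)

p = 0.01  # the variant starts rare: 1% of the population
for generation in range(40):
    p = next_freq(p)

# After only 40 generations the variant is essentially fixed (p > 0.999):
print(p)
```

With a strong enough fitness advantage, "gradual" selection sweeps a population in a few dozen generations, no immense number of years required.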

Comment by JohnBuridan on Is Success the Enemy of Freedom? (Full) · 2020-11-18T02:26:45.387Z · LW · GW

And freedom is a terrible master. I was far more free from college to college + 3 years, but freedom is something you spend. It's a currency which you exchange for some type of life. Now I have very little slack, but I have an endless supply of good places to devote my energy. And that's freedom to do good, the highest form of freedom.

Comment by JohnBuridan on Is Success the Enemy of Freedom? (Full) · 2020-11-18T02:19:31.297Z · LW · GW

I play StarCraft 1 month a year, and it's true, I stick with Protoss. Although now that you mention it, next time I play I'll play Terran to see what happens...

But I also learn bits of languages frequently and maintain 2 foreign languages, and although there is always some switching cost with languages, it's not competitive and so the costs to switching are low.

Comment by JohnBuridan on Is Success the Enemy of Freedom? (Full) · 2020-11-18T02:12:31.205Z · LW · GW

I want to keep being successful despite the costs to my freedom, but that's because I view my success as a service (hence I get paid for it), not as a source of my own happiness. Esse quam videri.

Comment by JohnBuridan on Can we hold intellectuals to similar public standards as athletes? · 2020-10-09T02:51:10.237Z · LW · GW

Here is a quick list of things that spring to mind when I evaluate intellectuals. A score does not necessarily need to cash out in a ranking. There are different types of intellectuals who serve different purposes in the tapestry of the life of the mind.

How specialized is this person's knowledge?
What are the areas outside of specialization that this person has above average knowledge about?
How good is this person at writing/arguing/debating in favor of their own case?
How good is this person at characterizing the case of other people?
What are this person's biggest weaknesses both personally and intellectually?

Comment by JohnBuridan on JohnBuridan's Shortform · 2020-07-19T03:26:04.453Z · LW · GW

Just a reminder to self that I wrote this, but need to write a counterargument to it based upon a new insight about what a good "popular book" can do.

Comment by JohnBuridan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-12T12:11:07.070Z · LW · GW

Ah! Thanks so much. I was definitely conflating farsightedness as discount factor and farsightedness as vision of possible states in a landscape.

And that is why some resource-increasing state may be too far out of the way, meaning NOT instrumentally convergent: the more distant that state is, the closer its value is to zero, until it actually is zero. Hence the bracket.

Comment by JohnBuridan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-12T00:43:01.945Z · LW · GW

You say:

"most agents stay alive in Pac-Man and postpone ending a Tic-Tac-Toe game", but only in the limit of farsightedness (γ→1)

I think there are two separable concepts at work in these examples, the success of an agent and the agent's choices as determined by the reward functions and farsightedness.

If we compare two agents, one at the limit of farsightedness (γ→1) and the other with half that (γ→1/2), then I expect the first agent to be more successful across a uniform distribution of reward functions and to skip over doing things like Trade School, but the second agent, in light of its more limited farsightedness, would be more successful if it were seeking power. As Vanessa Kosoy said above,

... gaining is more robust to inaccuracies of the model or changes in the circumstances than pursuing more "direct" paths to objectives.

What I meant originally is that if an agent doesn't know whether γ→1, then is it not true that an agent "seeks out the states in the future with the most resources or power"? Now, certainly the agent can get stuck at a local maximum because of shortsightedness, and an agent can forgo certain options as a result of its farsightedness.

So I am interpreting the theorem like so:

An agent seeks out states in the future that have more power at the limit of its farsightedness, but not states that, while they have more power, are below its farsightedness "rating."

Note: Assuming a uniform reward function.
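As a toy illustration of that reading (my own sketch, not from the post): with discount factor γ, a reward d steps away is worth γ^d of its face value, so a power-gaining state beyond the agent's effective horizon contributes almost nothing to its decision.

```python
def discounted_value(reward: float, distance: int, gamma: float) -> float:
    """Present value of a reward `distance` steps in the future."""
    return (gamma ** distance) * reward

# The same resource-rich state, 50 steps away, under increasing farsightedness:
for gamma in (0.5, 0.9, 0.99):
    print(gamma, discounted_value(10.0, 50, gamma))
# The shortsighted agent (gamma=0.5) values it at effectively zero; only as
# gamma approaches 1 does the distant power-gaining state carry any weight.
```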

Comment by JohnBuridan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-06T21:01:35.067Z · LW · GW

If an agent is randomly placed in a given distribution of randomly connected points, I see why there are diminishing returns on seeking more power, but that return is never 0, is it?

This gives me pause.

Comment by JohnBuridan on The Gears of Impact · 2020-04-30T15:13:50.510Z · LW · GW

You said, "Once promoted to your attention, you notice that the new plan isn't so much worse after all. The impact vanishes." Just to clarify, you mean that the negative impact of the original plan falling through vanishes, right?

When I think about the difference between value impact and objective impact, I keep getting confused.

Is money a type of AU? Money both functions as a resource for trading up (personal realization of goals) AND as a value itself (for example when it is an asset).

If this is the case, then any form of value based upon optionality violates the "No 'coulds' rule," doesn't it?

For example, imagine I have a choice between hosting a rationalist meetup and going on a long bike ride. There's a 50/50 chance of me doing either of those. Then something happens which removes one of those options (say a pandemic sweeps the country or something like that). If I'm interpreting this right, then the loss of the option has some personal impact, but zero objective impact.

Is that right?

Let's say an agent works in a low-paying job that has a lot of positive impact for her clients, both by helping them attain their values and by helping them increase resources for the world. Does the job have high objective impact and low personal impact? Is the agent in a bad equilibrium when achievable objective impact mugs her of personal value realization?

Let's take your examples of sad person with (P, EU):

Mope and watch Netflix (.90, 1). Text ex (.06, -500). Work (.04, 10). If suddenly one of these options disappeared, would that be a big deal? Behind my question is the worry that we are missing something about impact being exploited by one of the two terms which compose it, and about whether agents in this framework get stuck in strange equilibria because of the way probabilities change over time.
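To put numbers on that worry, here is a minimal sketch using the (P, EU) triples above; the rule that the surviving probabilities renormalize is my own assumption, not anything from the post:

```python
def expected_utility(options):
    """Probability-weighted utility over a dict of name -> (P, EU)."""
    return sum(p * eu for p, eu in options.values())

options = {"netflix": (0.90, 1), "text_ex": (0.06, -500), "work": (0.04, 10)}
print(expected_utility(options))  # 0.9 - 30 + 0.4, i.e. about -28.7

# Suppose the "text the ex" option disappears and the remaining
# probabilities renormalize:
remaining = {k: v for k, v in options.items() if k != "text_ex"}
total_p = sum(p for p, _ in remaining.values())
renormed = {k: (p / total_p, eu) for k, (p, eu) in remaining.items()}
print(expected_utility(renormed))  # about 1.38: one lost option swings EU a lot
```

Under that assumption, removing a low-probability, high-magnitude option moves expected utility from roughly -28.7 to roughly +1.38, which is the kind of sensitivity I am worried about.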

Help would be appreciated.

Comment by JohnBuridan on An Orthodox Case Against Utility Functions · 2020-04-18T15:19:27.668Z · LW · GW

Thank you for this.

Your characterization of Reductive Utility matches very well my own experience in philosophical discussions about utilitarianism. Most of my interlocutors object that I am proposing a reductive utility notion which suffers from incomputability (which is essentially how Anscombe dismissed it all in one paragraph, setting generations of philosophers eternally against any form of consequentialism).

However, I always thought it was obvious that one need not believe that objects and moral thinking must be derived from ever lower levels of world states.

What do you think are the downstream effects of holding Reductive Utility Function theory?

I'm thinking the social effect of RUF is more compartmentalization of domains, because from an agent's perspective their continuity is incomputable. Does that make sense?

Comment by JohnBuridan on mingyuan's Shortform · 2019-12-20T19:45:20.596Z · LW · GW

I think about the mystery of spelling a lot. Part of it is that English is difficult, of course. But still, why does my friend, who reads several long books a year, fail so badly at spelling? He has struggled since second and third grade, when his mom would take extra time out just to ensure that he learned his spelling words well enough to pass.

I have never really had a problem with spelling, and I seem to use many methods when I am thinking about spelling explicitly: sound it out, picture it, remember it as a chunk, recall the language of origin to figure out diphthongs. I notice that students who are bad at spelling frequently have trouble learning foreign languages; maybe the correlation points to a common cause?

Comment by JohnBuridan on Causal Abstraction Intro · 2019-12-20T18:19:49.029Z · LW · GW

Strongly agree that causal models need lots of visuals. I liked the video, but I also realize I understood it because I already know what Counterfactuals and Causal Inference is. I think that is actually a fair assumption given your audience and the goals of this sequence. Nonetheless, I think you should provide some links to required background information.

I am not familiar with circuits or fluid dynamics so those examples weren't especially elucidating to me. But I think as long as a reader understands one or two of your examples it is fine. Part of making this judgment depends upon your own personal intuition about how labor should be divided between author and reader. I am fine with high labor, and making a video is, IMO, already quite difficult.

I think you should keep experimenting with the medium.

Comment by JohnBuridan on We run the Center for Applied Rationality, AMA · 2019-12-20T17:55:45.161Z · LW · GW

What have you learned about transfer in your experience at CFAR? Have you seen people gain the ability to transfer the methods of one domain into other domains? How do you make transfer more likely to occur?

Comment by JohnBuridan on We run the Center for Applied Rationality, AMA · 2019-12-20T17:49:19.700Z · LW · GW

I'm sure the methods of CFAR have wider application than to Machine Learning...

Comment by JohnBuridan on The Tails Coming Apart As Metaphor For Life · 2019-12-20T16:54:47.820Z · LW · GW

“The Tails Coming Apart as a Metaphor for Life” should be retitled “The Tails Coming Apart as a Metaphor for Earth since 1800.” Scott does three things: 1) he notices that happiness research is framing-dependent, 2) he notices that happiness is a human-level term, but not specific at the extremes, and 3) he considers how this relates to deep-seated divergences in moral intuitions becoming ever more apparent in our world.

He hints at why moral divergence occurs with his examples. His extreme case of hedonic utilitarianism, converting the entire mass of the universe into nervous tissue experiencing raw euphoria, represents a ludicrous extension of the realm of the possible: wireheading, methadone, subverting factory farming. Each of these is dependent upon technology and modern economies, and presents real ethical questions. None of these were live issues for people hundreds of years ago. The tails of their rival moralities didn’t come apart – or at least not very often or in fundamental ways. Back then Jesuits and Confucians could meet in China and agree on something like the “nature of the prudent man.” But in the words of Lonergan that version of the prudent man, Prudent Man 1.0, is obsolete: “We do not trust the prudent man’s memory but keep files and records and develop systems of information retrieval. We do not trust the prudent man’s ingenuity but call in efficiency experts or set problems for operations research. We do not trust the prudent man’s judgment but employ computers to forecast demand,” and he goes on. For from the moment VisiCalc primed the world for a future of data aggregation, Prudent Man 1.0 has been hiding in the bathroom bewildered by modern business efficiency and moon landings.

Let’s take Scott’s analogy of the Bay Area Transit system entirely literally, and ask the mathematical question: when do parallel lines come apart or converge? Recall Euclid’s Fifth Postulate, the one saying that parallel lines will never intersect. For almost two thousand years no one could figure out why it was true. But it wasn’t true, and it wasn’t false. Parallel lines come apart or converge in most spaces. Only, alas, on a flat plane in regular Euclidean space ℝ³ do they obey Euclid’s Fifth and stay equidistant.

So what is happening when the tails come apart in morality? Even simple technologies extend our capacities, and each technological step extending the reach of rational consciousness into the world transforms the shape of the moral landscape which we get to engage with. Technological progress requires new norms to develop around it. And so the normative rules of a 16th century Scottish barony don’t become false; they become incomparable.

Now the Fifth postulate was true in a local sense, being useful for building roads, cathedrals, and monuments. And reflective, conventional morality continues to be useful and of inestimable importance for dealing with work, friends, and community. However, it becomes the wrong tool to use when considering technology laws, factory farming, or existential risk. We have to develop new tools.

Scott concludes that we have mercifully stayed within the bounds where we are able to correlate the contents of rival moral ideas. But I think it likely that this is getting harder and harder to do each decade. Maybe we need another, albeit very different, Bernhard Riemann to develop a new math demonstrating how to navigate all the wild moral landscapes we will come into contact with.

We Human-Sized Creatures access a surprisingly detailed reality with our superhuman tools. May the next decade be a good adventure, filled with new insights, syntheses, and surprising convergences in the tails.

Comment by JohnBuridan on Voluntourism · 2019-12-17T03:12:14.026Z · LW · GW

Well said, but there are some things I think must be added. I think it is right to compare voluntourism to regular tourism and to judge it on its goal of increasing "local" cooperation. By your account, voluntourism should have the twin effects of increasing GDP (or the general success and efficient cooperation) of the members of the church group by a few percentage points and increasing the level of donations over many years to the voluntoured location.

When doing the math to calculate the cost-benefit analysis of these voluntourism projects we should actually write off the cost of travel because in our "voluntourism" model, we assume the travel was going to happen anyway. If that's the case, then voluntourism is almost by definition a net-positive. So I agree we shouldn't be too negative about it.

Nonetheless, I don't think we should call voluntourism effective altruism. For something to be called effectively altruistic, we should be forced to take into account the costs of the program, and the cost of a week and a half in Haiti is $2000 per person. If we assume that a person experiences a financial gain of 2% per year because of the increased group cohesion in the States, that person would have to be making $100k per year to recoup the loss. If the person makes more money than that and donates the additional gains to the poor of Haiti, then it pays off positively both for him and for the people of Haiti.

I think under these assumptions, voluntourism only reaches the threshold of being effective, when very rich people are doing it. When you are giving tens to hundreds of thousands of dollars away per year anyway, voluntouring does not make a big percentage difference to your budget, but will likely help you give to more effective causes.
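A quick check of the break-even arithmetic (the $2000 trip cost and 2% cohesion gain are the assumptions stated above, not data):

```python
trip_cost = 2000.0      # week and a half in Haiti, per person
cohesion_gain = 0.02    # assumed annual financial gain from group cohesion

# Income at which the annual cohesion gain exactly covers the trip cost:
break_even_income = trip_cost / cohesion_gain
print(break_even_income)  # 100000.0
```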

Comment by JohnBuridan on Tabletop Role Playing Game or interactive stories for my daughter · 2019-12-16T15:57:22.505Z · LW · GW

A friend gave me Kingdom a few years ago, and I thought it was quite good for just this purpose. He used it for his kids; I'm planning on using it for mine and to generate stories for them.

I like that it is a roleplaying system in which mind modelling takes a high priority. It is story-driven, communication oriented, and based upon having a community. I really like it. Here are some examples of play. There are no dice.

Comment by JohnBuridan on Is Rationalist Self-Improvement Real? · 2019-12-15T04:26:59.954Z · LW · GW

There are so many arguments trying to be had at once here that it's making my head spin.

Here's one. What do we mean by self-help?

I think by self-help Scott is thinking about becoming psychologically a well-adjusted person. But what I think Jacobian means by "rationalist self-help" is coming to a gears level understanding of how the world works to aid in becoming well-adapted. So while Scott is right that we shouldn't expect rationalist self-help to be significantly better than other self-help techniques for becoming a well-adjusted person, Jacobian is right that rationalist self-help is an attempt to become both a well-adjusted person AND a person who participates in developing an understanding of how the world works.

So perhaps you want to learn how to navigate the space of relationships, but you also have this added constraint that you want the theory of how to navigate relationships to be part of a larger understanding of the universe, and not just some hanging chad of random methods without satisfactory explanations of how or why they work. That is to say, you are not willing to settle for unexamined common sense. If that is the case, then rationalist self-help is useful in a way that standard self-help is not.

A little addendum. This is not a new idea. Socrates thought of philosophy as way of life, and tried to found a philosophy which would not only help people discover more truths, but also make them better, braver, and more just people. Stoics and Epicureans continued the tradition of philosophy as a way of life. Since then, there have always been people who have made a way of life out of applying the methods of rationality to normal human endeavors, and human society since then has pretty much always been complicated enough to marginally reward them for the effort.

Comment by JohnBuridan on Conversational Cultures: Combat vs Nurture (V2) · 2019-12-14T23:22:47.774Z · LW · GW

In the Less Wrong community, Anti-Nurture commenters are afraid of laxity with respect to the mission, while Anti-Combat commenters are afraid of a narrow dogmatism infecting the community.

Comment by JohnBuridan on Conversational Cultures: Combat vs Nurture (V2) · 2019-12-14T23:19:52.836Z · LW · GW

I read this post when it initially came out. It resonated with me to such an extent that even three weeks ago, I found myself referencing it when counseling a colleague on how to deal with a student whose heterodoxy caused the colleague to make isolated demands for rigor from this student.

The author’s argument that Nurture Culture should be the default still resonates with me, but I think there are important amendments and caveats that should be made. The author said:

"To a fair extent, it doesn’t even matter if you believe that someone is truly, deeply mistaken. It is important foremost that you validate them and their contribution, show that whatever they think, you still respect and welcome them."

There is an immense amount of truth in this. Concede what you can when you can. Find a way to validate the aspects of a person’s point which you can agree with, especially with the person you tend to disagree with most or are more likely to consider an airhead, adversary, or smart-aleck. This has led me to a great amount of success in my organization. As Robin Hanson once asked pointedly, “Do you want to appear revolutionary, or be revolutionary?” Esse quam videri.

Combat Culture is a purer form of the Socratic Method. When we have a proposer and a skeptic, we can call this the adversarial division of labor: you propose the idea, I tell you why you are wrong. You rephrase your idea. Rinse and repeat until the conversation reaches one of three stopping points: aporia, agreement, or an agreement to disagree. In the Talmud example, both are proposers of a position and both are skeptics of the other person’s interpretation.

Nurture Culture does not bypass the adversarial division of labor, but it does put constraints on it, and for good reason. A healthy Combat Culture can only exist when a set of rare conditions is met. Ruby’s follow-up post outlined those conditions. But Nurture Culture is how we still make progress despite real-world conditions like needing consensus, not everyone being equal in status or knowledge, or some people having more skin in the game than others.

So here are some important things I would add more emphasis to from the original article after about a hundred iterations of important disagreements at work and in intellectual pursuits since 2018.

1. Nurture Culture assumes knowledge and status asymmetries.

2. Nurture Culture demands a lot of personal patience.

3. Nurture Culture invites you to consider what even the most wrong have right.

4. Sometimes you can overcome a disagreement at the round table by talking to your chief adversary privately and reaching a consensus, then returning for the next meeting on the same page.

While these might be two cultures, it’s helpful to remember that there are cultures to either side of these two: Inflexible Orthodoxy and Milquetoast Relativism. A Combat Culture can resolve into a dominant force with weaponized arguments owning the culture, while a Nurture Culture can become so limply committed to their nominal goals that no one speaks out against anything that runs counter to the mission.

Comment by JohnBuridan on Explaining why false ideas spread is more fun than why true ones do · 2019-11-28T19:09:09.316Z · LW · GW

I agree with your point that people frequently seem more interested in the spread of enemy ideas than their own. Only a lazy thinker would hold the opinion that bad ideas spread because of one cause. I saw a paper just yesterday detailing that advocates of religious terrorism tend to have at least some college education. That research only recently seems to have become mainstream. Why so? The causal mechanisms are not simple for a single person.

On the other hand, for looking at a group, that doesn't mean the causal mechanism has to be MORE complicated. It might even be simpler. A single person can come to a belief for complex reasons, but perhaps the same belief can propagate through society for a simpler reason than the reason each individual comes to embrace it, such as "it's included in every 9th grade textbook" or "it allows one to find a spouse more easily." Maybe this begs the question, though? How did the idea get included in every textbook? Why does the idea allow one to find a spouse more easily?

Peter Adamson's History of Philosophy without Any Gaps does not explicitly discuss the history of how ideas have spread, but he does deal with the history of the interaction of major philosophical concepts and schools of thought. Beneath the content of the arguments is the story of the history of education, literacy, and evangelization. Pick up any history of Christianity, and it will go into detail about the different tactics and strategies for the spread of the Christian idea: domestic proselytization, missionaries offering services to kings, centers of literacy, outreach to poor people, conversions of whole households, military force, etc. In some instances we even have the tactics recorded. The Jesuits sent thousands of letters to their superiors detailing their attempts to spread Christianity among Native Americans in the Great Lakes Region.

The Rationalsphere has not even begun to develop a method or programmatic approach to the spread of rationality.

Comment by JohnBuridan on The unexpected difficulty of comparing AlphaStar to humans · 2019-11-28T18:10:03.409Z · LW · GW

Here to say the same thing. If I want to discover better strategies in SC2 using AlphaStar, it's extremely important that AlphaStar employ some arbitrarily low, human-achievable level of athleticism.

I was disappointed when the vs. TLO videos came out that TLO thought he was playing against one AlphaStar agent. But in fact he played against five different agents which employed five different strategies, not a single agent adjusting and choosing among a broad range of strategies.

Comment by JohnBuridan on Prediction markets for internet points? · 2019-10-27T22:48:25.460Z · LW · GW

For subsidy creation, would it work to have a market czar (I mean, board of directors) who awards additional points for active participation in the questions they are most interested in? I suppose you could also have a timer that subsidizes low-activity markets to increase their activity, but maybe that would create too many externalities...
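One standard way to build a standing subsidy directly into the market (rather than relying on discretionary awards) is an automated market maker such as Hanson's logarithmic market scoring rule (LMSR), where a liquidity parameter `b` is the subsidy knob and the sponsor's worst-case loss is bounded. A minimal sketch in Python — the function names here are mine, just to illustrate the mechanism:

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def trade_cost(quantities, outcome, shares, b):
    """What a trader pays to buy `shares` shares of `outcome`:
    the difference in the cost function before and after the trade."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

def max_subsidy(b, n_outcomes):
    """The sponsor's worst-case loss, b * ln(n): the size of the subsidy."""
    return b * math.log(n_outcomes)
```

Raising `b` makes prices move less per share traded — a deeper, effectively subsidized market — at the cost of a larger worst-case payout of b·ln(n) by the sponsor, which is exactly the "timer subsidy" role played automatically.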

Comment by JohnBuridan on Prediction Markets Don't Reveal The Territory · 2019-10-18T21:10:43.809Z · LW · GW

Your points are well-taken. And thanks for pointing out the ambiguity about what problems can be overcome. I will clarify that to something more like "problems like x and y can be overcome by subsidizing markets and ensuring the right incentives are in place for the right types of information to be brought to light."

I had already retitled this section in my doc (much expanded and clarified) 'Do Prediction Markets Help Reveal The Map?' which is a much more exact title, I think.

I am curious what you mean by creating "a collective map." If you mean achieving localized shared understanding of the world, individual fields of inquiry do it with some success. But if you mean creating collective knowledge broad enough that 95% of people share the same models of reality, you are right: forget it. There's just too much difference in the ways communities think.

As for the 14th-c. John Buridan, the interesting thing about him is that he refused to join one of the schools and instead remained an Arts Master all his life, specializing in philosophy and the application of logic to resolving endless disputes across subjects. At the time, people were expected to join one of the religious orders and become a Doctor of Theology. He carved out a more neutral space away from those disputations and refined the use of logic to tackle problems in natural philosophy and psychology.

Comment by JohnBuridan on Prediction Markets Don't Reveal The Territory · 2019-10-18T20:32:59.875Z · LW · GW

Good point. But it is not just a cost problem. My conjecture in the above comment is that conditional markets are more prone to market failure because the structure of conditional questions decreases the pool of people who can participate.

I need more examples of conditional markets in action to figure out what the greatest causes of market failure are for conditional markets.

Comment by JohnBuridan on Planned Power Outages · 2019-10-14T20:47:54.269Z · LW · GW

And yes, our society is woefully unprepared to go more than two hours without power. I really think we should be prepared for five days at all times (not that I am, but just saying). Preparing for such things would be massively expensive, and regular stress tests lasting 12 hours or so would radically change communities.

Comment by JohnBuridan on Planned Power Outages · 2019-10-14T20:43:52.283Z · LW · GW

You will be disappointed to learn that the electric companies all around the United States have little incentive to care about their poles leading to residential areas, because those areas use half as much power as industrial customers. So outages of a few hours after every thunderstorm are pretty common in Midwestern cities.

Comment by JohnBuridan on Prediction Markets Don't Reveal The Territory · 2019-10-13T16:57:22.561Z · LW · GW

Very good and helpful! These strategies can make prediction markets *super effective*. However, getting a working prediction market on conditional statements increases the difficulty of creating a sufficiently liquid market. There is a difficult-to-resolve tension between optimizing for market efficiency and optimizing for "gear discovery."

People who want to use markets do need to be wary of this problem.

Comment by JohnBuridan on Prediction Markets Don't Reveal The Territory · 2019-10-13T13:20:21.395Z · LW · GW

Could you say more? Do you mean a prediction market can be on conditional statements?

Comment by JohnBuridan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-05T13:54:18.667Z · LW · GW

Yes, digital books offer far greater potential for visualization! Paper books offer no way to play with the inputs and outputs of models, and maybe one day even online academic papers will let us play with the data. I look forward to the day when modeling tools are simple enough that even humanities people will have no problem creating them to illustrate a point. (Although Excel really is quite good!)

Maybe it's partly excitement for his own research that leads Andy to claim books are a bad medium for learning.

Comment by JohnBuridan on 10 Good Things about Antifragile: a positivist book review · 2019-10-05T04:32:50.372Z · LW · GW

On schools: I agree that it would be interesting to have many types of schools. However, it doesn't seem like an ecosystem that could work. Schools don't have strong feedback mechanisms, and for there to be different types of schools there need to be different types of communities for those schools to serve. The current model of school is not merely a top-down imposition, but the evolved form of the school within a larger social system.

There can only be different types of schools when we form communities with different sets of needs.

Charter, Private, and Public schools are usually very similar. Sometimes schools do something interesting with their curriculum... usually not, though. I think the places to look for different types of education are Special Education, Gifted Education, Online Learning, Homeschooling, Boarding Schools, and Hybrid Schools. Each of these forms itself to serve a particular kind of person.

Comment by JohnBuridan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-05T04:14:37.632Z · LW · GW


Comment by JohnBuridan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-04T13:08:19.976Z · LW · GW

I like Andy's idea that there is a whole world of mnemonic products yet to be explored, and I am glad to see the insight into what is wrong with the emotional storytelling of standard MOOCs. There is definitely work to be done in the area of learning tools. He's convinced me that we can create far better learning tools without needing big technological breakthroughs. The wonders are already here.

One issue I have is his idea that the medium of content should be mnemonically based. This bothers me because I presume that if your content is really good, professionals and experts will read it as well. And since the way they read differs from the way a novice reads, they should be able to ignore tools designed for novices without being interrupted or slowed by them.

Last month, I wrote an essay which started as a rebuttal to Matuschak's "Why Books Don't Work," on r/ssc. In the end, I didn't directly address his article, but instead explained how books in their most developed form are excellent learning tools.

Comment by JohnBuridan on What are we assuming about utility functions? · 2019-10-02T21:18:53.166Z · LW · GW

Good post with good questions:

Side analogy concerning CEV:

I have been thinking about interpretation convergence the past few days, since a friend (a medievalist) mentioned to me that medieval scholastics underestimated the diversity of routes human cognition can take when thinking about a problem. They assumed that all reasonable people would converge, given enough time and resources, on the same truths and judgments.

He points out that this is why early Protestants felt comfortable advocating reading the Bible alone, without either authorities to govern interpretation or an interpretive apparatus such as a tradition or formal method. It turns out Christians differed on interpretation.

Now what's interesting about this is that the assumption of interpretation convergence was held so widely for so long. This indicates to me that value convergence relies upon shared cognitive processing styles. It might be possible, then, for two AIs to consider the same utility function V but process its significance differently, depending on how they process the rest of the global state.

Comment by JohnBuridan on The Zettelkasten Method · 2019-09-28T19:13:35.521Z · LW · GW

I am an avid bullet-journaler, and while I don't expect to try Zettelkasten, I will start using one of the methods described here to make my bullet journals easier to navigate.

Research and writing are only half of what I use my bullet journal for, which causes notes on the same topic to spread over many pages. If I give a number to a topic, I can continue that topic throughout my journals by just adding a "dot-number." If page 21 holds notes on formal models in business and I know I will be making more notes on that topic later, I can label page 21 "Formal Models in Business 21.1," and the next time I broach the subject on page 33, label that page "Formal Models in Business 21.2," etc. This will allow my Table of Contents to indicate related ideas.
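The dot-number scheme is mechanical enough to sketch in a few lines of Python. The class and names below are just mine, to illustrate the idea: the first page on a topic anchors the topic number, and each later page on the same topic gets that anchor plus an incrementing suffix.

```python
from collections import defaultdict

class JournalIndex:
    """Assigns Zettelkasten-style 'dot-numbers' to topic continuations."""

    def __init__(self):
        self.anchors = {}               # topic -> page number of first entry
        self.counts = defaultdict(int)  # topic -> how many entries so far

    def label(self, topic, page):
        # The first page seen for a topic becomes its permanent anchor number.
        if topic not in self.anchors:
            self.anchors[topic] = page
        self.counts[topic] += 1
        return f"{topic} {self.anchors[topic]}.{self.counts[topic]}"

idx = JournalIndex()
idx.label("Formal Models in Business", 21)  # "Formal Models in Business 21.1"
idx.label("Formal Models in Business", 33)  # "Formal Models in Business 21.2"
```

The table of contents then only needs the anchor number to gather every continuation of a topic, no matter how scattered the pages are.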

Thanks for the elucidation!

Comment by JohnBuridan on Meetups as Institutions for Intellectual Progress · 2019-09-23T20:56:37.708Z · LW · GW

Well said! Good thoughts. Since you bring this up at the same time as I have been thinking about it, I feel obligated to add my current thoughts now, even though they are not yet fully developed.

I also just started reading Walter Isaacson's biography of Ben Franklin and have been reading a few articles about the so-called "Republic of Letters," which existed outside the academy, made real scientific progress, and was contributed to by academics and enthusiasts alike. Here are some of the dynamics that seemed to make the Republic of Letters and the British clubs work:

1) a drive to invite promising people to the group, even if they are not up-to-speed yet.

2) those personal, friendly invitations go to up-and-coming writers and thinkers who still have enough slack in their time to join a new community. People at the beginning of their careers are the life of an organization.

3) an expectation that members not just observe but also write, share, and present essays, and host occasional focused discussions.

Comment by JohnBuridan on 10 Good Things about Antifragile: a positivist book review · 2019-04-28T19:51:11.088Z · LW · GW

Antifragile systems do exist. The ecosystem of restaurants quickly responds to demand: many restaurants go under within 5 years, but some survive. At the individual level, restaurants are fragile, but the system of restaurants is antifragile and not going anywhere barring a major catastrophe. True, one cannot be antifragile to everything. Nonetheless, one can determine which types of disasters a system is antifragile to.

If all restaurants were run by the government, either that system would collapse or it would be some hell-hole equilibrium of high-costs and low quality.

Your point about large centralized states is well-taken, though.

Comment by JohnBuridan on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-04-07T03:12:51.489Z · LW · GW

If you assume that emotions are a type of evaluation that causes fast task switching, then it makes sense to say your battlefield AI has a fear emotion. But if emotion is NOT a type of computational task, then it is ONLY by analogy that your battlefield AI has "fear."

This matters because if emotions like fear are not identifiable with a specific subjective experience, then the brain state of fear is not equivalent to the feeling of fear, which seems bizarre to say (cf. Kripke, Naming and Necessity, p. 126).

Comment by JohnBuridan on "The Unbiased Map" · 2019-01-28T02:21:17.704Z · LW · GW

I am working through questions about paradigms and historiography right now; these questions drove me to write this creative speech. I started from "Is there such a thing in history as 'just the facts'?" and went from there to "Is there such a thing in cartography as 'just the facts'?" This reductio ad absurdum, I hope, shows that maps are used for different purposes, and that there are better and worse maps for different purposes. We are looking for maps that fit our purposes: the right maps for the right purposes.

According to the line of reasoning in the reductio, there is no map which is "just the facts" without also being "all the facts" and thus becoming the territory itself.

What does this say about the craft of history? I don't know.

Comment by JohnBuridan on We Agree: Speeches All Around! · 2018-06-14T22:08:35.966Z · LW · GW

This could be the case, but only in specific political circumstances.

Only a self-deluded person would think that praising the decision after it was made would gain them influence among their peers who just talked about and made the decision. He would be noticed as Fred the weird-colleague-who-doesn't-talk-during-discussion-but-only-at-the-very-end-once-we've-decided-on-things.

Also, your alternative hypothesis doesn't seem to account for everyone in the decision group doing it with no further audience, which is the situation I'm talking about.

Comment by JohnBuridan on Guardians of Ayn Rand · 2016-12-14T18:08:37.990Z · LW · GW


There is a distinction (and I think a good one) between canonicity and fixed ideas.

I think it is always good, adding nuance and historical depth to one's thought, to read the Canon in any subject area. My library science hero Peter Briscoe characterizes a subject area's canon by saying that "in general half the knowledge in any given subject is contained in one or two dozen groundbreaking or synthesizing works" (pg. 11). The value of reading these "canonical" works is not that they are dogmas YOU HAVE TO BELIEVE, but that these are the ideas you have to engage with, these are the people you need to understand; reading x, y, and z is fundamental to engaging in conversation with this community of scholars.

The Sequences, hate some or love some, are part of the Canon around here.

Canonicity causes fixed ideas only insofar as it focuses the conversation and methodology. Responses to a certain idea "will naturally tend towards a certain, limited range of positions (like, either bodies can be infinitely divided, or not - and in the latter case one is an atomist)" (Rule 1 for History of Philosophy, Peter Adamson).

Briscoe's little book "Reading the Map of Knowledge" is, to me, canonical reading for being a rationalist. If you're interested, it's like 6 bucks.