Posts

Quick Look #1 Diophantus of Alexandria 2020-06-24T19:12:14.672Z · score: 20 (9 votes)
What is a Rationality Meetup? 2019-12-31T16:15:34.033Z · score: 4 (2 votes)
Preserving Practical Science (Phronesis) 2019-11-28T22:37:21.591Z · score: 9 (5 votes)
Saint Louis Junto Meeting 11/23/2019 2019-11-24T21:36:20.093Z · score: 3 (1 votes)
Junto: Questions for Meetups and Rando Convos 2019-11-20T22:11:55.350Z · score: 26 (10 votes)
Meetup 2019-11-20T21:21:50.889Z · score: 3 (1 votes)
Welcome to Saint Louis Junto [Edit With Your Details] 2019-11-20T21:19:38.632Z · score: 3 (1 votes)
Prediction Markets Don't Reveal The Territory 2019-10-12T23:54:23.693Z · score: 7 (6 votes)
10 Good Things about Antifragile: a positivist book review 2019-04-27T18:22:16.665Z · score: 21 (10 votes)
No Safe AI and Creating Optionality 2019-04-17T14:08:41.843Z · score: 7 (6 votes)
Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall 2019-02-23T17:48:46.837Z · score: 9 (4 votes)
"The Unbiased Map" 2019-01-27T19:08:10.051Z · score: 15 (9 votes)
We Agree: Speeches All Around! 2018-06-14T17:53:23.918Z · score: 41 (17 votes)

Comments

Comment by johnburidan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-12T12:11:07.070Z · score: 2 (2 votes) · LW · GW

Ah! Thanks so much. I was definitely conflating farsightedness as discount factor and farsightedness as vision of possible states in a landscape.

And that is why some resource-increasing state may be too far out of the way, meaning NOT instrumentally convergent: the more distant that state is, the closer its value is to zero, until it actually is zero. Hence the bracket.
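(To put made-up numbers on the discounting: with γ = 0.9, a resource state 50 steps away contributes only about 0.9^50 ≈ 0.5% of its undiscounted value to the agent's current evaluation, so far-enough states effectively drop out of the plan.)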

Comment by johnburidan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-12T00:43:01.945Z · score: 2 (2 votes) · LW · GW

You say:

"most agents stay alive in Pac-Man and postpone ending a Tic-Tac-Toe game", but only in the limit of farsightedness (γ→1)

I think there are two separable concepts at work in these examples: the success of an agent, and the agent's choices as determined by the reward functions and farsightedness.

If we compare two agents, one at the limit of farsightedness (γ→1) and the other with half that (γ = 1/2), then I expect the first agent to be more successful across a uniform distribution of reward functions and to skip over doing things like Trade School, while the second agent, in light of its more limited farsightedness, would be more successful if it were seeking power. As Vanessa Kosoy said above,

... gaining is more robust to inaccuracies of the model or changes in the circumstances than pursuing more "direct" paths to objectives.

What I meant originally is that if an agent doesn't know whether γ→1, then is it not true that the agent "seeks out the states in the future with the most resources or power"? Now, certainly the agent can get stuck at a local maximum because of shortsightedness, and an agent can forgo certain options as a result of its farsightedness.

So I am interpreting the theorem like so:

An agent seeks out states in the future that have more power at the limit of its farsightedness, but not states that, while they have more power, are below its farsightedness "rating."

Note: assuming a uniform distribution over reward functions.
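Here is a toy sketch of the picture I have in mind (the environment, rewards, distances, and discount factors below are all invented for illustration and are not from the post):

```python
# Toy illustration of how farsightedness (the discount factor gamma) changes
# whether a distant resource/power state is worth detouring for.
# All rewards, distances, and gammas below are made-up numbers.

def discounted_value(reward, distance, gamma):
    """Present value of a reward collected `distance` steps in the future."""
    return (gamma ** distance) * reward

direct_reward = 1.0   # small reward reachable in 1 step
power_reward = 10.0   # big resource/power payoff, but 30 steps out of the way

for gamma in (0.99, 0.5):
    direct = discounted_value(direct_reward, 1, gamma)
    detour = discounted_value(power_reward, 30, gamma)
    choice = "detour for power" if detour > direct else "take the direct reward"
    print(f"gamma={gamma}: direct={direct:.3f}, detour={detour:.3g} -> {choice}")

# gamma=0.99: the distant power state still dominates (10 * 0.99^30 ≈ 7.4).
# gamma=0.5:  it is worth roughly nothing (10 * 0.5^30 ≈ 1e-8), so the
#             shortsighted agent never "seeks out" that state at all.
```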

Comment by johnburidan on Clarifying Power-Seeking and Instrumental Convergence · 2020-05-06T21:01:35.067Z · score: 2 (2 votes) · LW · GW

If an agent is randomly placed in a given distribution of randomly connected points, I see why there are diminishing returns on seeking more power, but that return is never 0, is it?

This gives me pause.

Comment by johnburidan on The Gears of Impact · 2020-04-30T15:13:50.510Z · score: 1 (1 votes) · LW · GW

You said, "Once promoted to your attention, you notice that the new plan isn't so much worse after all. The impact vanishes." Just to clarify, you mean that the negative impact of the original plan falling through vanishes, right?

When I think about the difference between value impact and objective impact, I keep getting confused.

Is money a type of AU? Money both functions as a resource for trading up (personal realization of goals) AND as a value itself (for example when it is an asset).

If this is the case, then any form of value based upon optionality violates the "No 'coulds' rule," doesn't it?

For example, imagine I have a choice between hosting a rationalist meetup and going on a long bike ride. There's a 50/50 chance of me doing either of those. Then something happens which removes one of those options (say a pandemic sweeps the country or something like that). If I'm interpreting this right, then the loss of the option has some personal impact, but zero objective impact.

Is that right?

Let's say an agent works in a low-paying job that has a lot of positive impact for her clients - both by helping them attain their values and helping them increase resources for the world. Does the job have high objective impact and low personal impact? Is the agent in a bad equilibrium when achievable objective impact mugs her of personal value realization?

Let's take your example of the sad person, with each option as a (P, EU) pair:

Mope and watch Netflix (.90, 1). Text ex (.06, -500). Work (.04, 10). If suddenly one of these options disappeared, is that a big deal? Behind my question is the worry that we are missing something about impact being exploited by one of the two terms which compose it, and about whether agents in this framework get stuck in strange equilibria because of the way probabilities change over time.
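To make my worry concrete, here is the rough arithmetic I am doing (the renormalization rule when an option disappears is my own naive assumption, not something from the post):

```python
# Back-of-the-envelope version of the sad-person example.
# Options are (probability, expected utility) pairs from the comment above;
# renormalizing the remaining probabilities when an option disappears is my
# own naive modelling assumption, not something taken from the post.

options = {"netflix": (0.90, 1), "text_ex": (0.06, -500), "work": (0.04, 10)}

def expected_value(opts):
    total_p = sum(p for p, _ in opts.values())
    return sum((p / total_p) * eu for p, eu in opts.values())

before = expected_value(options)                                              # ≈ -28.7
after = expected_value({k: v for k, v in options.items() if k != "text_ex"})  # ≈ +1.4
print(before, after)

# Losing the low-probability "text the ex" option swings the value by about 30
# points, which is why I wonder whether one of the two terms composing impact
# can end up being exploited.
```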

Help would be appreciated.

Comment by johnburidan on An Orthodox Case Against Utility Functions · 2020-04-18T15:19:27.668Z · score: 3 (2 votes) · LW · GW

Thank you for this.

Your characterization of Reductive Utility matches very well my own experience in philosophical discussions about utilitarianism. Most of my interlocutors object that I am proposing a reductive utility notion which suffers from incomputability (which is essentially how Anscombe dismissed it all in one paragraph, setting generations of philosophers eternally against any form of consequentialism).

However, I always thought it was obvious that one need not believe that objects and moral thinking must be derived from ever lower levels of world states.

What do you think are the downstream effects of holding Reductive Utility Function theory?

I'm thinking the social effect of RUF is more compartmentalization of domains, because from an agent's perspective their continuity is incomputable. Does that make sense?

Comment by johnburidan on mingyuan's Shortform · 2019-12-20T19:45:20.596Z · score: 1 (1 votes) · LW · GW

I think about the mystery of spelling a lot. Part of it is that English is difficult, of course. But still, why does my friend who reads several long books a year fail so badly at spelling? He has struggled since 2nd and 3rd grade, when his mom would take extra time out just to ensure that he learned his spelling words well enough to pass.

I have never really had a problem with spelling and seem to use many methods when I am thinking about spelling explicitly: sound it out, picture it, remember it as a chunk, recall the language of origin to figure out diphthongs. I notice that students who are bad at spelling frequently have trouble learning foreign languages; maybe the correlation points to a common cause?

Comment by johnburidan on Causal Abstraction Intro · 2019-12-20T18:19:49.029Z · score: 4 (3 votes) · LW · GW

Strongly agree that causal models need lots of visuals. I liked the video, but I also realize I understood it because I already know Counterfactuals and Causal Inference. I think that is actually a fair assumption given your audience and the goals of this sequence. Nonetheless, I think you should provide some links to required background information.

I am not familiar with circuits or fluid dynamics so those examples weren't especially elucidating to me. But I think as long as a reader understands one or two of your examples it is fine. Part of making this judgment depends upon your own personal intuition about how labor should be divided between author and reader. I am fine with high labor, and making a video is, IMO, already quite difficult.

I think you should keep experimenting with the medium.

Comment by johnburidan on We run the Center for Applied Rationality, AMA · 2019-12-20T17:55:45.161Z · score: 9 (4 votes) · LW · GW

What have you learned about transfer in your experience at CFAR? Have you seen people gain the ability to transfer the methods of one domain into other domains? How do you make transfer more likely to occur?

Comment by johnburidan on We run the Center for Applied Rationality, AMA · 2019-12-20T17:49:19.700Z · score: 4 (4 votes) · LW · GW

I'm sure the methods of CFAR have wider application than to Machine Learning...

Comment by johnburidan on The Tails Coming Apart As Metaphor For Life · 2019-12-20T16:54:47.820Z · score: 10 (2 votes) · LW · GW

“The Tails Coming Apart as a Metaphor for Life” should be retitled “The Tails Coming Apart as a Metaphor for Earth since 1800.” Scott does three things: 1) he notices that happiness research is framing-dependent, 2) he notices that happiness is a human-level term, but not specific at the extremes, 3) he considers how this relates to deep-seated divergences in moral intuitions becoming ever more apparent in our world.

He hints at why moral divergence occurs with his examples. His extreme case of hedonic utilitarianism, converting the entire mass of the universe into nervous tissue experiencing raw euphoria, represents a ludicrous extension of the realm of the possible: wireheading, methadone, subverting factory farming. Each of these is dependent upon technology and modern economies, and presents real ethical questions. None of these were live issues for people hundreds of years ago. The tails of their rival moralities didn’t come apart – or at least not very often or in fundamental ways. Back then Jesuits and Confucians could meet in China and agree on something like the “nature of the prudent man.” But in the words of Lonergan that version of the prudent man, Prudent Man 1.0, is obsolete: “We do not trust the prudent man’s memory but keep files and records and develop systems of information retrieval. We do not trust the prudent man’s ingenuity but call in efficiency experts or set problems for operations research. We do not trust the prudent man’s judgment but employ computers to forecast demand,” and he goes on. For from the moment VisiCalc primed the world for a future of data aggregation, Prudent Man 1.0 has been hiding in the bathroom bewildered by modern business efficiency and moon landings.

Let’s take Scott’s analogy of the Bay Area Transit system entirely literally, and ask the mathematical question: when do parallel lines come apart or converge? Recall Euclid’s Fifth Postulate, the parallel postulate, which guarantees that parallel lines never meet. For almost two thousand years no one could figure out how to prove it from the other axioms. But it wasn’t true, and it wasn’t false: it is independent of them. Parallel lines come apart or converge in most spaces. Alas, only on a flat plane in ordinary Euclidean space ℝ3 do they obey Euclid’s Fifth and stay equidistant.

So what is happening when the tails come apart in morality? Even simple technologies extend our capacities, and each technological step extending the reach of rational consciousness into the world transforms the shape of the moral landscape which we get to engage with. Technological progress requires new norms to develop around it. And so the normative rules of a 16th century Scottish barony don’t become false; they become incomparable.

Now the Fifth postulate was true in a local sense, being useful for building roads, cathedrals, and monuments. And reflective, conventional morality continues to be useful and of inestimable importance for dealing with work, friends, and community. However, it becomes the wrong tool to use when considering technology laws, factory farming, or existential risk. We have to develop new tools.

Scott concludes that we have mercifully stayed within the bounds where we are able to correlate the contents of rival moral ideas. But I think it likely that this is getting harder and harder to do each decade. Maybe we need another, albeit very different, Bernhard Riemann to develop a new math demonstrating how to navigate all the wild moral landscapes we will come into contact with.

We Human-Sized Creatures access a surprisingly detailed reality with our superhuman tools. May the next decade be a good adventure, filled with new insights, syntheses, and surprising convergences in the tails.

Comment by johnburidan on Voluntourism · 2019-12-17T03:12:14.026Z · score: 1 (1 votes) · LW · GW

Well said, but there are some things I think must be added. I think it is right to compare voluntourism to regular tourism and to judge it on its goal of increasing "local" cooperation. By your account, voluntourism should have the twin effects of increasing GDP (or the general success and efficient cooperation) of the members of the church group by a few percentage points and increasing the level of donations over many years to the voluntoured location.

When doing the math for the cost-benefit analysis of these voluntourism projects, we should actually write off the cost of travel, because in our "voluntourism" model we assume the travel was going to happen anyway. If that's the case, then voluntourism is almost by definition a net positive. So I agree we shouldn't be too negative about it.

Nonetheless, I don't think we should call voluntourism effective altruism. For something to be called effectively altruistic, we should be forced to take into account the costs of the program, and the cost of a week and a half in Haiti is $2000 per person. If we assume that a person experiences a financial gain of 2% per year because of the increased group cohesion in the States, that person would have to be making $100k per year to recoup the loss compared to inflation. If the person makes more than that and donates the additional gains to the poor of Haiti, then the trip pays off both for him and for the people of Haiti.
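(To spell out the arithmetic behind that threshold: a 2% annual gain covers the $2000 trip only when 0.02 × income ≥ $2000, i.e. income ≥ $100,000. At a $50k income the same 2% gain is only $1000, half the cost of the trip.)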

I think under these assumptions, voluntourism only reaches the threshold of being effective when very rich people are doing it. When you are giving tens to hundreds of thousands of dollars away per year anyway, voluntouring does not make a big percentage difference to your budget, but it will likely help you give to more effective causes.

Comment by johnburidan on Tabletop Role Playing Game or interactive stories for my daughter · 2019-12-16T15:57:22.505Z · score: 3 (1 votes) · LW · GW

A friend gave me Kingdom http://www.lamemage.com/kingdom/ a few years ago, and I thought it was quite good for just this purpose. He used it for his kids; I'm planning on using it for mine and to generate stories for them.

I like that it is a roleplaying system in which modelling other minds is a high priority. It is story-driven, communication-oriented, and based upon having a community. I really like it. Here are some examples of play. There are no dice.

https://www.meetup.com/Story-Games-Seattle/messages/boards/thread/48963126

https://web.archive.org/web/20190212225741/https://plus.google.com/116492722966699295346/posts/1ci5W1FnAQw

https://rpggeek.com/thread/1259330/cactus-flats


Comment by johnburidan on Is Rationalist Self-Improvement Real? · 2019-12-15T04:26:59.954Z · score: 10 (5 votes) · LW · GW

There are so many arguments trying to be had at once here that it's making my head spin.

Here's one. What do we mean by self-help?

I think by self-help Scott is thinking about becoming a psychologically well-adjusted person. But what I think Jacobian means by "rationalist self-help" is coming to a gears-level understanding of how the world works to aid in becoming well-adapted. So while Scott is right that we shouldn't expect rationalist self-help to be significantly better than other self-help techniques for becoming a well-adjusted person, Jacobian is right that rationalist self-help is an attempt to become both a well-adjusted person AND a person who participates in developing an understanding of how the world works.

So perhaps you want to learn how to navigate the space of relationships, but you also have this added constraint that you want the theory of how to navigate relationships to be part of a larger understanding of the universe, and not just some hanging chad of random methods without satisfactory explanations of how or why they work. That is to say, you are not willing to settle for unexamined common sense. If that is the case, then rationalist self-help is useful in a way that standard self-help is not.

A little addendum. This is not a new idea. Socrates thought of philosophy as a way of life, and tried to found a philosophy which would not only help people discover more truths, but also make them better, braver, and more just people. The Stoics and Epicureans continued the tradition of philosophy as a way of life. Since then, there have always been people who have made a way of life out of applying the methods of rationality to normal human endeavors, and human society has pretty much always been complicated enough to marginally reward them for the effort.

Comment by johnburidan on Conversational Cultures: Combat vs Nurture (V2) · 2019-12-14T23:22:47.774Z · score: 1 (1 votes) · LW · GW

In the Less Wrong community, Anti-Nurture commenters are afraid of laxity with respect to the mission, while Anti-Combat commenters are afraid of a narrow dogmatism infecting the community.

Comment by johnburidan on Conversational Cultures: Combat vs Nurture (V2) · 2019-12-14T23:19:52.836Z · score: 7 (3 votes) · LW · GW

I read this post when it initially came out. It resonated with me to such an extent that even three weeks ago, I found myself referencing it when counseling a colleague on how to deal with a student whose heterodoxy caused the colleague to make isolated demands for rigor from this student.

The author’s argument that Nurture Culture should be the default still resonates with me, but I think there are important amendments and caveats that should be made. The author said:

"To a fair extent, it doesn’t even matter if you believe that someone is truly, deeply mistaken. It is important foremost that you validate them and their contribution, show that whatever they think, you still respect and welcome them."

There is an immense amount of truth in this. Concede what you can when you can. Find a way to validate the aspects of a person’s point which you can agree with, especially with the person you tend to disagree with most or are more likely to consider an airhead, adversary, or smart-aleck. This has led me to a great amount of success in my organization. As Robin Hanson once asked pointedly, “Do you want to appear revolutionary, or be revolutionary?” Esse quam videri.

Combat Culture is a purer form of the Socratic Method. When we have a proposer and a skeptic, we can call this the adversarial division of labor: you propose the idea, I tell you why you are wrong. You rephrase your idea. Rinse and repeat until the conversation reaches one of three stopping points: aporia, agreement, or an agreement to disagree. In the Talmud example, both are proposers of a position and both are skeptics of the other person's interpretation.

Nurture Culture does not bypass the adversarial division of labor, but it does put constraints on it - and for good reason. A healthy combat culture can only exist when a set of rare conditions is met. Ruby's follow-up post outlined those conditions. But Nurture Culture is how we still make progress despite real-world conditions like needing consensus, not everyone being equal in status or knowledge, or some people having more skin in the game than others.

So here are some important things I would add more emphasis to from the original article after about a hundred iterations of important disagreements at work and in intellectual pursuits since 2018.

1. Nurture Culture assumes knowledge and status asymmetries.

2. Nurture Culture demands a lot of personal patience.

3. Nurture Culture invites you to consider what even the most wrong have right.

4. Sometimes you can overcome a disagreement at the round table by talking to your chief adversary privately and reaching a consensus, then returning for the next meeting on the same page.

While these might be two cultures, it's helpful to remember that there are cultures to either side of these two: Inflexible Orthodoxy and Milquetoast Relativism. A Combat Culture can devolve into a dominant force with weaponized arguments owning the culture, while a Nurture Culture can become so limply committed to its nominal goals that no one speaks out against anything that runs counter to the mission.

Comment by johnburidan on Explaining why false ideas spread is more fun than why true ones do · 2019-11-28T19:09:09.316Z · score: 3 (2 votes) · LW · GW

I agree with your point that people frequently seem more interested in the spread of enemy ideas than their own. Only a lazy thinker would hold the opinion that bad ideas spread because of one cause. I saw a paper just yesterday detailing that advocates of religious terrorism tend to have at least some college. That research only recently seems to have become mainstream. Why so? The causal mechanisms are not simple for a single person.

On the other hand, when looking at a group, that doesn't mean the causal mechanism has to be MORE complicated. It might even be simpler. A single person can come to a belief for complex reasons, but perhaps the same belief can propagate through society for a simpler reason than the reason each individual comes to embrace it, such as "it's included in every 9th grade textbook" or "it allows one to find a spouse more easily." Maybe this begs the question though? How did the idea get included in every textbook? Why does the idea allow one to find a spouse more easily?

Peter Adamson's History of Philosophy Without Any Gaps does not discuss the history of how ideas have spread explicitly, but he does deal with the history of the interaction of major philosophical concepts and schools of thought. Beneath the content of the arguments is the story of the history of education, literacy, and evangelization. Pick up any history of Christianity, and it will go into detail about the different tactics and strategies for the spread of the Christian idea: domestic proselytization, missionaries offering services to kings, centers of literacy, outreach to poor people, conversions of whole households, military force, etc. In some instances we even have the tactics recorded. The Jesuits sent thousands of letters to their superiors detailing their attempts to spread Christianity among Native Americans in the Great Lakes region.

The Rationalsphere has not even begun to develop a method or programmatic approach to the spread of rationality.

Comment by johnburidan on The unexpected difficulty of comparing AlphaStar to humans · 2019-11-28T18:10:03.409Z · score: 1 (1 votes) · LW · GW

Here to say the same thing. If I want to discover better strategies in SC2 using AlphaStar, it's extremely important that AlphaStar be employing some human-achievable, even arbitrarily low, level of athleticism.

I was disappointed when the vs. TLO videos came out that TLO thought he was playing against a single AlphaStar agent. In fact he played against five different agents which employed five different strategies, not a single agent that was adjusting and choosing among a broad range of strategies.

Comment by johnburidan on Prediction markets for internet points? · 2019-10-27T22:48:25.460Z · score: 1 (1 votes) · LW · GW

For subsidy creation, would it work to have a market czar (I mean a board of directors) who awards additional points for active participation in questions they are most interested in? I suppose you could also just have a timer which subsidizes low-activity markets to increase their activity, but maybe that would create too many externalities...

Comment by johnburidan on Prediction Markets Don't Reveal The Territory · 2019-10-18T21:10:43.809Z · score: 1 (1 votes) · LW · GW

Your points are well-taken. And thanks for pointing out the ambiguity about what problems can be overcome. I will clarify that to something more like "problems like x and y can be overcome by subsidizing markets and ensuring the right incentives are in place for the right types of information to be brought to light."

I had already retitled this section in my doc (much expanded and clarified) 'Do Prediction Markets Help Reveal The Map?' which is a much more exact title, I think.

I am curious about what you mean by creating 'a collective map.' If you mean achieving localized shared understanding of the world, individual fields of inquiry do that with some success. But if you mean creating collective knowledge broad enough that 95% of people share the same models of reality, you are right, forget it. There's just too much difference among the ways communities think.

As for the 14th c. John Buridan, the interesting thing about him is that he refused to join one of the schools and instead remained an Arts Master all his life, specializing in philosophy and the application of logic to resolve endless disputes in different subjects. At the time people were expected to join one of the religious orders and become a Doctor of Theology. He carved out a more neutral space away from those disputations and refined the use of logic to tackle problems in natural philosophy and psychology.

Comment by johnburidan on Prediction Markets Don't Reveal The Territory · 2019-10-18T20:32:59.875Z · score: 1 (1 votes) · LW · GW

Good point. But it is not just a cost problem. My conjecture in the above comment is that conditional markets are more prone to market failure because the structure of conditional questions decreases the pool of people who can participate.

I need more examples of conditional markets in action to figure out what the greatest causes of market failure are for conditional markets.

Comment by johnburidan on Planned Power Outages · 2019-10-14T20:47:54.269Z · score: 3 (2 votes) · LW · GW

And yes, our society is woefully unprepared to go more than two hours without power. I really think we should be prepared for five days at all times (not that I am, but just saying). To prepare for such things would be massively expensive and radically change communities if they had to undergo regular stress tests lasting 12 hours or so.

Comment by johnburidan on Planned Power Outages · 2019-10-14T20:43:52.283Z · score: 3 (2 votes) · LW · GW

You will be disappointed to learn that the electric companies all around the United States have little incentive to care about their poles leading to residential areas, because those areas use half as much power as industrial customers. So outages of a few hours after every thunderstorm are pretty common in Midwestern cities.

Comment by johnburidan on Prediction Markets Don't Reveal The Territory · 2019-10-13T16:57:22.561Z · score: 2 (3 votes) · LW · GW

Very good and helpful! These strategies can make prediction markets *super effective*; however, getting a working prediction market on conditional statements increases the difficulty of creating a sufficiently liquid market. There is a difficult-to-resolve tension between optimizing for market efficiency and optimizing for "gear discovery."

People who want to use markets do need to be wary of this problem.

Comment by johnburidan on Prediction Markets Don't Reveal The Territory · 2019-10-13T13:20:21.395Z · score: 1 (1 votes) · LW · GW

Could you say more? Do you mean a prediction market can be on conditional statements?

Comment by johnburidan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-05T13:54:18.667Z · score: 5 (2 votes) · LW · GW

Yes, digital books offer far greater potential for visualization! Books do not offer a way to play with the inputs and outputs of models, and maybe one day even online academic papers will allow us to play with the data more. I look forward to the day when modeling tools are simple enough to use that even humanities people will have no problem creating them to illustrate a point. Although, Excel really is quite good!

Maybe it's partly his excitement for his own research that leads Andy to claim books are a bad medium for learning.

Comment by johnburidan on 10 Good Things about Antifragile: a positivist book review · 2019-10-05T04:32:50.372Z · score: 1 (1 votes) · LW · GW

On schools: I agree that it would be interesting to have many types of schools. However, it doesn't seem like an ecosystem that could work. Schools don't have strong feedback mechanisms, and for there to be different types of schools there need to be different types of communities that those schools are serving. The current model of schools is not merely a top-down imposition, but is the evolved form of the school given the larger social system.

There can only be different types of schools when we form communities with different sets of needs.

Charter, Private, and Public schools are usually very similar. Sometimes schools do something interesting with their curriculum... usually not, though. I think the places to look for different types of education are Special Education, Gifted Education, Online Learning, Homeschooling, Boarding School, and Hybrid schools. Each of these forms itself to serve a particular kind of people.

Comment by johnburidan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-05T04:14:37.632Z · score: 1 (1 votes) · LW · GW

Yeah!


Comment by johnburidan on [Link] Tools for thought (Matuschak & Nielson) · 2019-10-04T13:08:19.976Z · score: 3 (3 votes) · LW · GW

I like Andy's idea that there is a whole world of mnemonic products which have yet to be explored. And I am glad to see the insight on what is wrong with the emotional storytelling of standard MOOCs. There is definitely work to be done in the area of learning tools. He's convinced me that we can create far better learning tools without needing big technological breakthroughs. The wonders are already here.

One issue I have is his idea that the medium of content should be mnemonically based. This bothers me because I presume that if your content is really good, professionals and experts will read it as well. And since the way that they read is different from the manner of a novice, they should be able to ignore and not be interrupted or slowed by tools designed for novices.

Last month, I wrote an essay which started as a rebuttal to Matuschak's "Why Books Don't Work," on r/ssc. In the end, I didn't directly address his article, but instead explained how books in their most developed form are excellent learning tools.

Comment by johnburidan on What are we assuming about utility functions? · 2019-10-02T21:18:53.166Z · score: 1 (1 votes) · LW · GW

Good post with good questions:

Side analogy concerning CEV:

I have been thinking about interpretation convergence the past few days, since a friend (a medievalist) mentioned to me the other day that medieval scholastics underestimated the diversity of routes human cognition can take when thinking about a problem. They assumed that all reasonable people would converge, given enough time and resources, on the same truths and judgments.

He points out this is why early Protestants felt comfortable advocating reading the Bible alone, without either authorities to govern interpretations or an interpretive apparatus such as a tradition or formal method. Turns out, Christians differed on interpretation.

Now what's interesting about this is that the assumption of interpretation convergence was held so widely for so long. This indicates to me that value convergence relies upon shared cognitive processing styles. It might be possible, then, for two AIs to consider a utility function V but process its significance differently, depending on how they process the rest of the global state.


Comment by johnburidan on The Zettelkasten Method · 2019-09-28T19:13:35.521Z · score: 3 (2 votes) · LW · GW

I am an avid bullet-journaler, and while I don't expect to try Zettelkasten, I will start using one of the methods described here to make my bullet journals easier to navigate.

Research and writing are only half of what I use my bullet journal for, but this causes notes on the same topic to spread over many pages. If I give a number to a topic, then I can continue that topic throughout my journals by just adding a "dot-number." If page 21 is notes on formal models in business and I know that I will be making more notes on that same topic later, I can call it "Formal Models in Business 21.1," and the next time I broach the subject on page 33, I can label the page "Formal Models in Business 21.2," etc. This will allow my Table of Contents to indicate related ideas.

Thanks for the elucidation!

Comment by johnburidan on Meetups as Institutions for Intellectual Progress · 2019-09-23T20:56:37.708Z · score: 14 (5 votes) · LW · GW

Well said! Good thoughts. Since you bring this up at the same time as I have been thinking about it, I feel obligated to add my current thoughts now, even though they are as yet not fully developed.

I also just started reading Walter Isaacson's biography of Ben Franklin and have been reading a few articles about the so-called "Republic of Letters," which existed outside of the academy, made real scientific progress, and was contributed to by academics and enthusiasts alike. Here are some of the dynamics that seemed to make the Republic of Letters and the British clubs work:

1) a drive to invite promising people to the group, even if they are not up-to-speed yet.

2) those personal, friendly invitations go to up-and-coming writers and thinkers who still have enough slack in their time to join a new community. People at the beginning of their careers are the life of an organization.

3) an expectation that members not just observe but also write, share, and present essays, and host occasional focused discussions.

Comment by johnburidan on 10 Good Things about Antifragile: a positivist book review · 2019-04-28T19:51:11.088Z · score: 3 (2 votes) · LW · GW

Antifragile systems do exist. The ecosystem of restaurants quickly responds to demands: many go under within 5 years, but some survive. At the individual level, restaurants are fragile, but the system of restaurants is antifragile and not going anywhere barring a major catastrophe. True, one cannot be antifragile to everything. Nonetheless, one can determine what types of disasters a system is antifragile to.

If all restaurants were run by the government, either that system would collapse or it would settle into some hell-hole equilibrium of high costs and low quality.

Your point about large centralized states is well-taken, though.

Comment by johnburidan on Can an AI Have Feelings? or that satisfying crunch when you throw Alexa against a wall · 2019-04-07T03:12:51.489Z · score: 1 (1 votes) · LW · GW

If you assume that emotions are a type of evaluation that causes fast task switching, then it makes sense to say your battlefield AI has a fear emotion. But if emotion is NOT a type of computational task, then it is ONLY by analogy that your battlefield AI has "fear."

This matters because if emotions like fear are not identifiable with a specific subjective experience, then the brain state of fear is not equivalent to the feeling of fear, which seems bizarre to say (Cf. Kripke "Naming and Necessity" p.126).

Comment by johnburidan on "The Unbiased Map" · 2019-01-28T02:21:17.704Z · score: 12 (5 votes) · LW · GW

I am working through questions about paradigms and historiography right now. These questions drove me to write this creative speech. I started from "Is there such a thing in history as 'just the facts'?" and from there went to whether there is anything in cartography that is "just the facts." This reductio ad absurdum, I hope, shows that maps are used for different purposes, and there are better and worse maps for different purposes. We are looking for maps which fit our purposes. The right maps for the right purposes.

According to the line of reasoning in the reductio, there is no map which is "just the facts" without also being "all the facts" and thus becoming the territory itself.

What does this say about the craft of history? I don't know.

Comment by johnburidan on We Agree: Speeches All Around! · 2018-06-14T22:08:35.966Z · score: 0 (4 votes) · LW · GW

This could be the case, but only in specific political circumstances.

Only a self-deluded person would think that praising the decision after it was made would gain them influence among their peers who just talked about and made the decision. He would be noticed as Fred the weird-colleague-who-doesn't-talk-during-discussion-but-only-at-the-very-end-once-we've-decided-on-things.

Also, your alternative hypothesis doesn't seem to account for everyone in the decision group doing it with no further audience, which is the situation I'm talking about.

Comment by johnburidan on Guardians of Ayn Rand · 2016-12-14T18:08:37.990Z · score: 5 (5 votes) · LW · GW

:)

There is a distinction (and I think a good one) between canonicity and fixed ideas.

I think it is always good, adding nuance and historical depth to one's thought, to read the Canon in any subject area. My library science hero Peter Briscoe characterizes a subject area's canon by saying that "in general half the knowledge in any given subject is contained in one or two dozen groundbreaking or synthesizing works" (pg. 11). The value of reading these "canonical" works is not that they are the dogmas YOU HAVE TO BELIEVE, but that these are the ideas you have to engage with, these are the people you need to understand; reading x, y, and z is fundamental to your engaging in conversation with this community of scholars.

The Sequences, hate some or love some, are part of the Canon around here.

Canonicity causes fixed ideas only in so far as it focuses the conversation and methodology. Responses to a certain idea "will naturally tend towards a certain, limited range of positions (like, either bodies can be infinitely divided, or not - and in the latter case one is an atomist)," (Rule 1 for History of Philosophy, Peter Adamson).

Briscoe's little book "Reading the Map of Knowledge" is, to me, canonical reading for being a rationalist. If you're interested, it's like 6 bucks.

Comment by johnburidan on Less Wrong lacks direction · 2015-06-05T04:36:09.693Z · score: 0 (0 votes) · LW · GW

I don't think the site as a whole needs a "new" direction. It needs continued conversation, new sub-projects, and for the members to engage with the community.

Less Wrong has developed its own conventions for argument, reference points for logic, and traditions of interpretation of certain philosophical, computational, and everyday problems. The arguments all occur within a framework which implicitly furnishes the members with a certain standard of thinking and living (which we don't always live up to).

Maybe what you really want is for people in the community to find a place where they can excel and contribute more. What we need most is to continue to develop ways people can contribute, not force the generation of projects from above.

Comment by johnburidan on Less Wrong lacks direction · 2015-06-05T04:35:12.208Z · score: 0 (0 votes) · LW · GW

ha!

Comment by johnburidan on Why the culture of exercise/fitness is broken and how to fix it · 2015-05-16T02:41:24.799Z · score: 0 (0 votes) · LW · GW

For some reason, I notice certain people, myself included, crave a certain amount of manual labor. Better prefab stuff would be great; however, you still need someone to install it. And just mixing instant concrete and laying a small foundation is enough to make me feel like I'm a contributing member of the physical infrastructure of society. Despite my belief in specialization, I still want for myself what you called 'Mastery.'

Comment by johnburidan on Why the culture of exercise/fitness is broken and how to fix it · 2015-05-11T17:24:05.077Z · score: 0 (0 votes) · LW · GW

Just so you know, I think a lot of people (or maybe it's just me) use competition in a wide sense; e.g., I would consider casual basketball a competition simply because there is a winner. But the motivation for playing in the first place isn't winning; the desire is, as you say, to be actively getting better at some exercise-sport with your peers.

Yeah, I guess that's true about manual labor. It burns calories, keeps you fit-ish, but doesn't build muscle (except for baling hay, to hell with hay). Although, I would feel a lot more manly if I could restore a bathroom competently.

Comment by johnburidan on Why the culture of exercise/fitness is broken and how to fix it · 2015-05-10T02:28:29.792Z · score: 1 (1 votes) · LW · GW

Good article pulling the covers from a cultural blind spot. We do obsess over exercise as though it were something you set out to do, instead of something that is part of an activity. The logic of sports has always been more appealing to me: the drive to compete and do well leads to the desire to hone specific skills that will enable success in the particular context of that sport. What's exercise... can you even win that game?

You never took a turn in this article towards manual labor. I hope to hear your thoughts on gardening, home improvement, and volunteer work as they relate to exercise. What 'household/handyman' activities meet the exercise threshold, or are there any?

Comment by johnburidan on Theological Epistemology · 2015-05-06T01:28:09.044Z · score: 0 (0 votes) · LW · GW

That way of looking at the Summa made me chuckle. Aquinas was a theologian and did his duty toward the Church, I suppose. I tend to be very sympathetic towards certain medieval philosophers whom I believe didn't use their intelligence disingenuously. Peter Abelard wrote his entire ethics and metaphysics without any reference to religion, as did certain other Jewish and Muslim philosophers who were in the business of showing how flimsy the arguments of other philosophers were, even if those arguments came to similar conclusions about the existence of God or the eternity of the universe. In medieval Paris theological issues were left to theologians and philosophical issues to the Arts faculty. The Church and University exhorted people to stay within their respective fields, and failure to do so would put one in danger of censure. It's an interesting tidbit.

Comment by johnburidan on Theological Epistemology · 2015-05-05T15:08:22.375Z · score: 3 (3 votes) · LW · GW

You know of Edward Feser! God, I hate that guy (pun intended). If I didn't respect books so much, I would have torn many pages out of The Last Superstition. His expression of Scholasticism is absurdly simplistic. But you lay out what Feser would say very well. I don't find him an accurate expositor of Aquinas at all; he's ridiculously uncomfortable with ambiguity and so makes his arguments by fudging definitions and appealing to intuition. He's the opposite of a decent scholastic.

I would venture to say that the majority of medieval scholars don't do what Feser does with definitions. But Feser is afraid of secularism in a way I don't think medieval intellectuals were. Does that jibe with your understanding of this stuff? I think the argument you made would be made today but would not have been accepted in the 13th century, on the grounds that although God is the source of the natural goals for different species, it does not follow that he personally loves particulars (Avicenna didn't even think God could know particulars).

I'm sorry we're talking about Scholasticism on LW...

Comment by johnburidan on Theological Epistemology · 2015-05-05T05:08:02.005Z · score: 1 (1 votes) · LW · GW

In the best-case scenario, these spokesmen are able to come to the conclusion that God is not lacking in power and is incapable of deception using just logic and natural philosophy, a.k.a. science. Revelation isn't knowledge in the same way that philosophy and science provide knowledge. Revelation is knowledge gained by an act of the will, i.e., you just assent to it. The other types of knowledge are gained by human reason through the senses.

Many people throughout theological history have thought they could not only prove the existence of God, but also prove he has those qualities which we generally associate with God, like omnipresence, simplicity, and goodness. Many of these arguments do prove something, but generally not something we would consider a loving, personal God. For that you generally need a Holy Writ and Divine Inspiration.

In theological epistemology it is a logical impossibility for the Supreme Being to do something heinous. If the source of the inspiration is indeed God, you will not need to doubt its truth (you'd just do that assent thing). But what if the inspiration isn't from God, but from a very powerful, invisible, and ineffable being that seems similar? Now we're cooking with oil. How would we know? Could you tell the difference?

Here's a digression.

Imagine a voice comes to you and says, "I want you to be the Father of my people. You will have a son even though your wife is wicked old." Then you discover that your wife is pregnant. You have a son! Later the voice comes again and says, "Kill your only-begotten son, even though you love him, in my name." When you go to kill your son, an angel of the same God stops you at the last moment, and your faith that the voice was not evil is vindicated (supposedly).

This is the ancient story of Abraham and Isaac in the Hebrew Bible. Abraham is the Father of all three monotheistic traditions today. Why did Abraham think the one God was speaking to him and not some demon? How is it that God can make what seems an unethical command? This is the subject of Kierkegaard's book, Fear and Trembling.

End of digression.

I think at a practical level, we have to reject the type of skepticism you are proposing. If we did live in such a world, there would be very little, if any, reliability in inductive reasoning, and we would have to radically doubt all knowledge that wasn't either tautological or reducible to non-contradiction. Imagine if the Abrahamic God did exist but wasn't God, just a powerful, deceiving spirit who has been working in the world, pretending to love it this entire time.

If observation is tampered with, you can't know for certain. If it isn't tampered with, you might accept something like, "there is an act of love which a pretending God couldn't fake." Choose your Schelling-point for true love vs. seeming love and go from there.

Comment by johnburidan on Theological Epistemology · 2015-05-05T03:05:04.281Z · score: 5 (9 votes) · LW · GW

There is some confusion here. Asking Less Wrong-flavored questions using theological terms generally requires misusing the terms. This is unfortunate, because these questions are really interesting, but most of us don't have the requisite understanding of theology to do it well (including myself obviously (although I venture that I might know more than most (#nesting))). So, my answer will be really disappointing.

In the monotheistic theology of Islam (represented by Al-Farabi), Judaism (represented by Maimonides), and Christianity (represented by Thomas Aquinas), when it is said that God is omnipotent, they are saying God is not lacking in power, not that God can actually will to do any particular action whatsoever. In this way God is restrained. For example, God cannot create a rock so heavy that he cannot lift it, because that is not a logical possibility. Or as a mathematician once said, "Nonsense is nonsense, even if you say it about God."

To your question about a loving God's possibility to deceive. This is a tough question because it is several in one: can God deceive, can God's nature be learned about through observation of the created universe, and can God deceive about his nature? The first two questions are contested within each faith tradition; the third question (which I think is most relevant here) is not disputed by the three philosophers. They all would say, "No."

I'm going to summarize a really long argument the best I can: since God is a self-caused simple being (having no parts and lacking in no quality), his intellect (it's an operation) can only be directed toward Truth, and his will (it's his other operation) can only be directed toward the Good (which is love).

This argument requires that we agree that Truth and Goodness have a primary level of existence, whereas falsity and evil exist contingently on the existence of truth and goodness. Since God has no parts, he cannot be oriented towards the composite essences of falsity and evil.

This is definitely an unsatisfying solution for most of us. The major problem for us approaching Theological Epistemology, as I see it, is that we have to start by explaining what metaphysics we are willing to accept and what we aren't.

Comment by johnburidan on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-30T21:41:31.182Z · score: 0 (0 votes) · LW · GW

I agree, with some provisos. My counter-examples can be shown to lead to bad effects, but only in an ad hoc kind of way. I think the GRE cheater could potentially justify his/her actions by pointing toward other evils in society (like nepotism or it's-who-you-know-ism) that require him to get an edge on this allegedly stupid test in order to succeed in a world more interested in money, favors, and quantifying smarts than it is in true intelligence. He may also counter that there is no "slot" he takes by doing as well as someone with "higher ability" if the ability measured is merely the ability to take the GRE, which our cheater contends it is. There is never an end to the litany of justifications and contingent realities where a greater good is brought out, or a systematic evil exposed, etc., etc.

I mean, what were these people thinking? I hesitate to wag my finger only to point out that they are hurting other people by this behavior. Is that THE rational argument? Do you think demonstrating the second-order effects is the most convincing way to demonstrate the wrongness of cheating? My reasons for not cheating aren't solely based on the effects my actions may (but not necessarily will) have on others. I also desire to achieve the happiness that comes from excellence at something. As I mentioned above, I think you need both rationales.

Comment by johnburidan on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-30T18:01:08.189Z · score: 1 (1 votes) · LW · GW

I have trouble seeing two things. It seems to me not all theists reject terminal values: for example, beatitude (transcendental happiness) is a terminal value for some theists, while for others serving God is terminal (so to speak). And it also seems theism can be reconciled with Heidegger by being a terminal value itself, freely chosen in order to save me from my geworfenheit.

"Save me from my geworfenheit" being a customary household phrase. :)

Comment by johnburidan on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-30T17:41:31.806Z · score: -1 (1 votes) · LW · GW

Cheating and lying do not always devalue other people's happiness, though. Cheating on the GRE doesn't obviously hurt other people. Lying (or misdirection) sometimes spares someone a painful truth or leaves them none the wiser. Like when a kid lies to his dad about where he was earlier this afternoon. These pretty simple counter-examples don't refute your point fully. I propose them because I think something is lacking in saying that the only reason we can't cheat and lie our way to the good life is that it hurts other people's happiness. Sometimes it doesn't.

But cheating in Axis & Allies always separates the agent from the opportunity to gain the happiness that comes from being an excellent Axis & Allies player. I think this type of happiness must be part of your moral reasoning too.

Comment by johnburidan on Defeating the Villain · 2015-03-29T18:09:27.249Z · score: 1 (1 votes) · LW · GW

Well, Nancy, Melkor was imprisoned once before by the Valar. They thought he had been rehabilitated and were mistaken; he destroyed the Two Trees of Valinor.

He will return in the end for a Ragnarok called the Dagor Dagorath. You are right, the outcomes cannot be known by us. I assume he will be vanquished totally, but Eru's creation is incomplete. Something unexpected may yet happen.

Comment by johnburidan on Some famous scientists who believed in a god · 2015-03-27T12:00:27.401Z · score: 3 (3 votes) · LW · GW

This year's recipient of the Carl Sagan Award was a Jesuit Brother. I find it very funny, although I don't know if I should.

From what I understand, there are a lot of established and respectable scientists who are theists. Anyone could go on a treasure hunt for more, but it doesn't prove anything. It's just a numbers game.