I'm excited to see this, and have been enjoying using it.
I'm curious, though: is it interoperable with other formats, or are there any projects on the horizon to convert this to Markdown if needed? I'd like to have backups, and possibly later post some of my LW content to other platforms.
I realize that probably couldn't keep all the functionality (no fully isomorphic translation), but even some of it could be handy.
Quick thoughts: 1) I found this interesting, thanks! 2) There's a significant gender imbalance in the Bay Area scene at least. 3) Lots of EAs I know are in Academia; either getting advanced degrees or doing research. Academia presents a lot of challenges to relationships, I would be curious how our statistics compare to those in Academia. 4) Many of my non-EA friends seem to place great value on relationships, my EA friends less so. 5) I would be surprised if poly actually made that big of a difference. My guess would be that it's not actually that popular in EA. I get the impression that poly sounds radical and has been discussed by some of the key people, so seems like a much bigger deal than it really is.
Thanks so much Filipe, and I'm excited to see your thoughts on the topic. I think this kind of imagining is highly valuable.
I don't have much context about you personally, but from my engineering and entrepreneurial experience, my main piece of feedback would be that I get the sense that you think this might be a whole lot easier than I think it would be. Something like what you propose sounds very interesting, but I think this initial proposal would be challenging to do well without tons of money and time. I've seen my fair share of people start far overambitious projects, totally (though predictably) fail, and be heartbroken as a result.
I think it's worthwhile to do the following, but think about them in distinct buckets: 1) Imagine what great systems would be like with near unbounded resources. 2) Figure out what pragmatic steps we can take in the short term to get started.
Both of these are valuable. All of my post was in the former camp, and I would suggest that your post mostly is as well.
Some thoughts on the comment, in the vein of category (1):
Translators in the platform could give a score (from 0 to 10) of how good that translation looked for different translation formats
This is a minor point, but I would suggest a system where people rank how good the translation is for individual people (with many defined attributes), instead of trying to bucket things into different categories. Defining the categories is a really messy process that will leave artifacts. This is kind of a classic ML prediction sort of problem.
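To make the "ML prediction" framing concrete, here's a toy sketch; the features, ratings, and the choice of k-nearest-neighbours as the learner are all hypothetical, just to show that no hand-defined reader categories are needed:

```python
# Toy data: (reader_age, reader_expertise, translation_formality) -> score 0-10.
# All numbers are made up for illustration.
ratings = [
    ((25, 0.2, 0.1), 8.0),
    ((60, 0.9, 0.9), 9.0),
    ((30, 0.5, 0.8), 4.0),
]

def predict(features, k=2):
    """Estimate a reader's score via k-nearest-neighbours over past ratings."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(ratings, key=lambda r: dist(r[0], features))[:k]
    return sum(score for _, score in nearest) / k

print(predict((28, 0.4, 0.5)))  # → 6.0 (average of the two nearest ratings)
```

A real system would normalize the features (age dominates the distance here) and learn from far richer attributes, but the point stands: quality is predicted per reader, not per bucket.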
Thus, we could create a market for expansive translations focused on people of different styles.
I think that the current infrastructure for setting up markets in the regular ways is quite mediocre. Another option would be to hire a team of translators working full-time, but monitor and optimize their performance.
On the topic of obtaining source data, using new content generation would be very expensive, and I could imagine it being difficult to do well. I think the right word for "expansive translators" isn't "translator" but "communicator", so the people to learn from are popular communicators, not people with translation experience.
I think there's already a lot of content out there if you're a bit creative. There are probably tens of thousands of "What is Bitcoin" posts on YouTube and other platforms aimed at a wide variety of audiences, combined with metrics for how popular these are. If you could find ways of learning from those, I would be more optimistic.
Our new expansive-translations dot com, or our new chrome extension.
Arbital had features kind of like what I'm suggesting. They identified a need, but found it very challenging to get people to actually do the writing. I suggest checking out the comments from that thread to learn about their experiences.
I'd be enthusiastic about making browser extensions to augment LessWrong in some key ways. It's possible translation could start small, like replacing some key words with words one may know better (hopefully with hovers that show the original).
You'd have to have a very clear goal in mind when constructing your professional-context postmodern punk musical Hamlet, and the choice of that goal would make a huge difference to the end product.
Agreed. This is a radical definition.
As translation gets more and more expansive, it becomes more difficult to ensure consistency and quality. But it also leads to a lot of value generation, so can often be worth it.
Hamilton, the Musical, was arguably a retelling / "expansive translation" of the book, which itself was a summary of the original documents. I think most people who originally heard about the idea of Hamilton thought it could never work because of how weird (and expansive) it was. Not only was it presented for people who liked musicals, but it was sort of optimized to appeal specifically to communities of color. It doesn't only translate the older dialects into modern English, but it converts it specifically to the vernacular and musical preferences of parts of Hip Hop culture.
I'm a big fan of that. I'm sure a lot of information was lost along the way, but the value proposition of this dramatic reinterpretation is clear to many viewers.
Now, not every potential translator may be as talented as Lin-Manuel Miranda, but the potential is still clear, and in the future we'll have AI to help us.
Why are modern translations so narrow? What level of nuance would you like them to capture?
By narrow I mean they aim to provide language-to-language translation, but translation could hypothetically be done on a much more granular level. For instance, a translation that matches the very specific vernacular of some shared Dutch & Jamaican family with its own code words. And there’s no reason the semantics can’t be considerably changed. Maybe Hamlet could be adjusted to take place in whichever professional context a small community would be most likely to understand, and then presented as a postmodern punk musical because that community really likes postmodern punk musicals. Whatever works.
One could argue that "liberal translations could never improve on the source, and therefore we need to force everyone to only use the source." I disagree.
In translations of poetry - something I have amateur experience with - you have a lot of decisions to make.
I'm sure there must be a great deal more of similar discussion around Biblical translations. See the entire field of Hermeneutics, for instance.
That said, I'd note I'm personally interested in this for collective epistemic reasons. I think that the value of "a large cluster of people can better understand each other and thus do much better research and epistemic and moral thinking" is a bigger priority than doing this for artistic reasons, though perhaps it's less interesting.
No Fear Shakespeare 'translates' the original plays into modern English, which I admit is a helpful idea, but there's a problem with these beyond just the feeling of being juvenile: the 'translations' are often wrong, sometimes blatantly so.
Agreed that translations are often wrong, but I don't think this is reason to give up on them! Translations between languages often fail, but I'm thankful we have them.
The alternative to translation that I was taught in school about Shakespeare was to just give us the source and have us figure it out. I'm absolutely sure we did a terrible job at it, even worse than that bad translation. I don't remember ever having a lesson on how to translate Early Modern English to Modern English. I think I barely understood how large the difference was, let alone interpreted it correctly.
My knowledge on this topic comes from the Great Courses course "The Story of Language" by John McWhorter. Lecture 7 is great and goes into detail on the topic.
"We don't process Shakespeare as readily as we often suppose. With all humility I think there is a kind of mythology - a bit of a hoax - surrounding our reception of Shakespeare as educated people. And I will openly admit that, except when I have read a Shakespeare play - and this is particularly the tragedies - when I go and hear it, cold, at normal speed, I don't understand enough to make the evening worth it.
"I don't like to admit it - I learned long ago that you're not supposed to say so - but it's true. And even as somebody who loves languages and is familiar with English and all its historical layers, I have seen The Tempest not once, not twice, but three times, never having gotten down to reading that particular play, I have never known what in the world was going on in that play.
"And I seriously doubt if I am alone. And it's not that the language is poetry. Poetry's fine. It's because Shakespeare in many ways was not writing in the language that I am familiar with. It's been many many centuries and the language has changed.
"One friend of mine said that the only time he had gone to Shakespeare and really genuinely understood it the way we understand a play by O'Neill or by Tony Kushner is when he saw Hamlet in France because it was in relatively modern French and he was very good at French."
In regards to being able to read "the same thing" as other people: I would of course agree this is one benefit of the current system. Any novel system will have downsides, and this is certainly one of them. I think the upsides are far more significant than this one downside, at least. Generally we don't mind tutors or educational YouTube courses that are made to be particularly useful for small groups of people, even though these things do decrease the amount of standardization.
we don't have a great track record of using technology like this wisely and not overusing it
Agreed. With great power comes great responsibility, and often we don't use that responsibility that well. But two things: 1) The upsides are really significant. If "being really good at teaching people generic information" is so powerful as to be scary, that doesn't leave us much hope for other tech advancements. 2) Even if it comes out to be net-negative, it could be useful to investigate further (like investigating whether it is net-negative).
Yea; I think mixtures of continuous + discrete conditionals should open up a bunch of options. I imagine it's hard to grok all of these without using it a bit, so I do look forward to publishing it more openly and encouraging people to "mess around".
I think the space of options is quite massive, though the technical and academic challenges are massive as well.
Western culture is known for being individualistic instead of collectivist. It's often assumed (with evidence) that individualistic cultures tend to be more truth seeking than collectivist ones, and that this is a major advantage.
But theoretically, there could be highly truth seeking collectivist cultures. One could argue that Bridgewater is a good example here.
In terms of collective welfare, I'm not sure if there are many advantages to individualism besides the truth seeking. A truth seeking collectivist culture seems pretty great to me, in theory.
On First Principles Land: Even if they are ideal Bayesians, they could come to mistaken conclusions given unfortunate evidence. I'm not sure how we should handle updating on the information of others; that complicates things significantly. I was mostly imagining each person independently acting as a semi-ideal Bayesian agent who derives everything from the fundamental truths and evidence themselves. I would be interested in variations with various kinds of knowledge sharing.
On Mimesis Land: Yea, this land is confusing to me too. I guess belief manipulation would essentially act as an evolutionary process. Some clusters would learn techniques for belief selection, and the successful clusters would pass these belief-selection techniques on. That said, this would take a while, and most people could be oblivious to it.
I finally got around to changing this. Coming back to this article, I was also confused by "clarification" when I first skimmed it. I agree more now that it was a pretty poor word to use originally; apologies!
I found this article interesting: https://www.thegentlemansjournal.com/25-iconic-moments-that-define-the-21st-century-thus-far/
It lists several events that caused large celebrations. However, you can notice a pattern:
2008 — Barack Obama wins the 2008 election, becoming the first African American President
2011 — Commandos conduct a raid in Pakistan, which ends with the killing of Osama bin Laden
2012 — The US rover, Curiosity, takes a selfie on Mars
2014 — Malala Yousafzai becomes the youngest ever recipient of a Nobel Prize
2015 — Same-sex marriage is legalised across all fifty states in the USA
Almost all were political or nontechnical.
Personally, I think that most kinds of modern technology are highly incremental, and recently have been treated with suspicion.
I also could imagine that real technology change has slowed down a fair bit (especially outside of AI), as has been discussed extensively.
I like the idea of "The the fallacy". Whenever there's a phrase of the form "The X", it presupposes that there is exactly one X, and that's typically not true.
In this case, the ideas of the reference class or the outside view are dramatic simplifications. These seem like weak heuristics that often are valuable, but are difficult to translate to a construct that you can apply intense reasoning on. There's no great formal definition yet, and until there is, trying to do careful analysis seems challenging to me.
I agree that "considering multiple models" is generally best, where possible. It's hard to argue against this though.
I keep seeing posts about all the terrible news stories in the news recently. 2020 is a pretty bad year so far.
But the news I've seen people posting typically leaves out most of what's been going on recently in India, Pakistan, much of the Middle East, most of Africa, most of South America, and many, many other places as well.
The world is far more complicated than any of us have time to adequately comprehend. One of our greatest challenges is to find ways to handle all this complexity.
The simple solution is to spend more time reading the usual news: if the daily news becomes three times as intense, spend three times as much time reading it. But this is not a scalable strategy.
I'd hope that over time more attention is spent on big picture aggregations, indices, statistics, and quantitative comparisons.
This could mean paying less attention to the day to day events and to individual cases.
I was recently pointed to the YouTube channel Psychology in Seattle. I think it's one of my favorite finds in a while.
I'm personally more interested in workspace psychology than relationship psychology, but my impression is that they share a lot of similarities.
Emotional intelligence gets a bit of a bad rap due to its fuzzy nature, but I'm convinced it's one of the top few things for most people to get better at. I know lots of great researchers and engineers who keep repeating the same failure modes, causing severe organizational and personal problems.
Emotional intelligence books and training typically seem quite poor to me. The alternative format here of "let's just show you dozens of hours of people interacting with each other, and point out all the fixes they could make" seems much better than most books or lectures I've seen.
This YouTube series does an interesting job at that. There's a whole bunch of "let's watch this reality TV show, then give our take on it." I'd be pretty excited about more things like this being posted online, especially in other contexts.
Related, I think the potential of reality TV is fairly underrated in intellectual circles, but that's a different story.
Fair point. I imagine when we are planning for where to aim things though, we can expect to get better at quantifying these things (over the next few hundred years), and also aim for strategies that would broadly work without assuming precarious externalities.
The 4th Estate heavily relies on externalities, and that's precarious.
There's a fair bit of discussion of how much of journalism has died with local newspapers, and separately how the proliferation of news past 3 channels has been harmful for discourse.
In both of these cases, the argument seems to be that a particular type of business transaction resulted in tremendous positive national externalities.
It seems to me very precarious to expect society at large to work only because of a handful of accidental and temporary externalities.
In the longer term, I'm more optimistic about setups where people pay for the ultimate value, instead of this being an externality. For instance, instead of buying newspapers, which helps in small part to pay for good journalism, people donate to nonprofits that directly optimize the government reform process.
If you think about it, the process of:
People buy newspapers, a fraction of which are interested in causing change.
Great journalists come across things around government or society that should be changed, and write about them.
A bunch of people occasionally get really upset about some of the findings, and report this to authorities or vote differently.
is all really inefficient and roundabout compared to what's possible. There's very little division of expertise among the public, for instance: there's no coordination where readers realize that there are 20 things that deserve equal attention, and so split into 20 subgroups. This is very real work the readers aren't getting compensated for, so they'll do whatever they personally care the most about at the moment.
Basically, my impression is that the US is set up so that a well functioning 4th estate is crucial to making sure things don't spiral out of control. But this places great demands on the 4th estate that few people now are willing to pay for. Historically this functioned by positive externalities, but that's a sketchy place to be. If we develop better methods of coordination in the future I think it's possible to just coordinate to pay the fees and solve the problem.
For those reading, the main thing I'm optimizing Foretold for right now, is for forecasting experiments and projects with 2-100 forecasters. The spirit of making "quick and dirty" questions for personal use conflicts a bit with that of making "well thought out and clear" questions for group use. The latter are messy to change, because it would confuse everyone involved.
Note that Foretold does support full probability distributions with the Guesstimate-like syntax, which PredictionBook doesn't. But it's less focused on the quick individual use case in general.
If there are recommendations for simple ways to make it better for individuals (maybe other workflows), I'd be up for adding some support or integrations.
[retracted: I read the question too quickly, misunderstood it]
My impression, after some thought and discussion (over the last ~1 year or so), is that people being smarter / predicting better will probably decrease the number of wars and make them less terrible. That said, there are of course tails; perhaps some specific wars could be far worse (one country being much better at destroying another).
As I understand it, many wars started in part due to overconfidence: both sides were overconfident in their odds of success (for many reasons). If they were properly calibrated, they would be more likely to make immediate trades/concessions or similar, rather than take fights, which are rather risky.
Similarly, I wouldn't expect different AGIs to physically fight each other often at all.
Thanks! I've looked at (2) a bit and some other work on Information Architecture.
I've found it interesting but kind of old-school; it seems to have been a big deal when web tree navigation was a big thing, and to have died down after. It also seems pretty applied, in that there isn't a lot of connection with academic theory in how one could think about these classifications.
I'm sure this has been discussed elsewhere, including on LessWrong. I haven't spent much time investigating other thoughts on these specific lines. Links appreciated!
The current model of a classically rational agent assumes logical omniscience and precomputed credences over all possible statements.
This is really, really bizarre upon inspection.
First, "logical omniscience" is very difficult, as has been discussed (The Logical Induction paper goes into this).
Second, "all possible statements" includes statements from every complexity class we know of (from my understanding of complexity theory). "Credences over all possible statements" would easily include an infinite number of credences, and even arbitrarily large amounts of computation would not be able to hold all of them.
Precomputation for things like this is typically a poor strategy, for this reason. The often-better strategy is to compute things on-demand.
A nicer definition could be something like:
A credence is the result of an [arbitrarily large] amount of computation being performed using a reasonable inference engine.
It should be quite clear that calculating credences from existing explicit knowledge is a very computationally intensive activity. The naive Bayesian approach would be to start with one piece of knowledge, and then perform a Bayesian update on each subsequent piece. The "pieces of knowledge" can be prioritized according to heuristics, but even then this would be a challenging process.
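A minimal sketch of that naive sequential process, with made-up likelihood numbers (purely illustrative), might look like:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update of a binary hypothesis on one piece of evidence."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Fold in pieces of knowledge one at a time, starting from a prior.
credence = 0.5
evidence = [(0.8, 0.3), (0.9, 0.5), (0.2, 0.6)]  # hypothetical likelihood pairs
for p_true, p_false in evidence:
    credence = bayes_update(credence, p_true, p_false)
print(round(credence, 3))  # → 0.615
```

Even this toy version has to touch every piece of evidence in turn; deciding which pieces are worth processing at all is where the heuristics come in.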
I think I'd like to see specification of credences that vary with computation or effort. Humans don't currently have efficient methods to use effort to improve our credences, as a computer or agent would be expected to.
Solomonoff's theory of Induction or Logical Induction could be relevant for the discussion of how to do this calculation.
Intervention dominance arguments for consequentialists
There's a fair bit of resistance to long-term interventions from people focused on global poverty, but there are a few distinct things going on here. One is that there could be a disagreement on the use of discount rates for moral reasoning, a second is that the long-term interventions are much more strange.
No matter which is chosen, however, I think that the idea of "donate as much as you can per year to global health interventions" seems unlikely to be ideal upon closer examination.
For the last few years, the cost-to-save-a-life estimates of GiveWell seem fairly steady. The S&P 500 has not been steady, it has gone up significantly.
Even if you were committed to giving purely to global health, you'd generally have been better off delaying. It seems quite possible that for every life you could have saved in 2010, you could have saved two or more by investing the money and donating it in 2020, with a fairly typical investment strategy. (Arguably, leverage could have made this much higher.) From what I understand, the one life saved in 2010 would likely not have resulted in one extra life-equivalent saved by 2020; the returns per year were likely less than those of the stock market.
One could of course say something like, "My discount rate is over 3-5% per year, so that outweighs this benefit." But if that were true, it seems likely that the opposite strategy would have worked: one could have borrowed a lot of money in 2010, donated it, and then spent the next 10 years paying it back.
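To make the arithmetic concrete, here's a rough sketch; the 7% annual return and the flat $3,500 cost-per-life figure are illustrative assumptions, not GiveWell's actual numbers:

```python
# How much further would a 2010 donation have gone if invested until 2020?
annual_return = 0.07                   # assumed average market return
years = 10
growth = (1 + annual_return) ** years  # ~1.97: the donation roughly doubles

cost_per_life = 3500                   # hypothetical steady cost estimate, USD
donation = 10000
lives_if_donated_2010 = donation / cost_per_life
lives_if_donated_2020 = donation * growth / cost_per_life
print(round(lives_if_donated_2010, 2), round(lives_if_donated_2020, 2))
# → 2.86 5.62
```

With a discount rate below the market return, delaying wins; with a discount rate above it, the symmetric borrow-now, donate-now, repay-later strategy would have won instead.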
Thus, it would seem suspiciously convenient if one's enlightened preferences suggested neither investing for long periods nor borrowing.
One obvious counter to immediate donations would be to suggest that the EA community financially invests money, perhaps with leverage.
While it is difficult to tell if other interventions may be better, it can be simpler to ask if they are dominant; in this case, that means that they predictably increase EA-controlled assets at a rate higher than financial investments would.
A good metaphor could be the finances of cities. Hypothetically, cities could invest much of their earnings near-indefinitely, or at least for very long periods, but in practice this typically isn't key to their strategies. Often they can do quite well by investing in themselves. For instance, core infrastructure can be expensive but predictably leads to significant city revenue growth. Often these strategies are so effective that cities issue bonds in order to pay for more of this kind of work.
In our case, there could be interventions that are obviously dominant to financial investment in a similar way. An obvious one would be education; if it were clear that giving or lending someone money would lead to predictable donations, that could be a dominant strategy to more generic investment strategies. Many other kinds of community growth or value promotion could also fit into this kind of analysis. Related, if there were enough of these strategies available, it could make sense for loans to be made in order to pursue them further.
What about a non-EA growth opportunity? Say, "vastly improving scientific progress in one specific area." This could be dominant (to investment, for EA purposes) if it would predictably help EA purposes by more than the investment returns. This could be possible. For instance, perhaps a $10mil donation to life extension research could predictably increase $100mil of EA donations by 1% per year, starting in a few years.
One trick with these strategies is that many would fall into the bucket of "things a generic wealthy group could do to increase their wealth", which is mediocre because we should expect those types of things to be well-funded already. We may also want interventions that differentially change wealth amounts.
Kind of sadly, this seems to suggest that some resulting interventions may not be "positive sum" for all relevant stakeholders. Many of the interventions that are positive sum with respect to other powerful interests may already be funded, so the remaining ones could be relatively neutral or zero-sum for other groups.
I'm just using life extension because the argument is simple, not because I believe it holds. I think it would be quite tricky to find great options here, as evidenced by the fact that other very rich or powerful actors would have similar motivations.
I'm quite curious how this ordering correlates with the original LessWrong karma of each post, if that analysis hasn't been done yet. Perhaps I'd be more curious to better understand what a great ordering would be. I feel like multiple factors are taken into account when voting, and it's also quite possible that the userbase comprises multiple clusters with distinct preferences.
One nice thing about cases where the interpretations matter, is that the interpretations are often easier to measure than intent (at least for public figures). Authors can hide or lie about their intent or just never choose to reveal it. Interpretations can be measured using surveys.
It seems like there are a few distinct kinds of questions here.
You are trying to estimate the EV of a document.
Here you want to understand the expected and actual interpretations of the document. The intention only matters insofar as it affects the interpretations.
You are trying to understand the document. Example: You're reading a book on probability to understand probability.
Here the main thing to understand is probably the author intent. Understanding the interpretations and misinterpretations of others is mainly useful so that you can understand the intent better.
You are trying to decide if you (or someone else) should read the work of an author.
Here you would ideally understand the correctness of the interpretations of the document, rather than that of the intention. Why? Because you will also be interpreting it, and are likely somewhere in the range of people who have interpreted it. For example, if you are told, "This book is apparently pretty interesting, but every single person who has attempted to read it, besides one, apparently couldn't get anywhere with it after spending many months trying", or worse, "This author is actually quite clever, but the vast majority of people who read their work misunderstand it in profound ways", you should probably not make an attempt, unless you are highly confident that you are much better than the mentioned readers.
Communication should be judged for expected value, not intention (by consequentialists)
TLDR: When trying to understand the value of information, understanding the public interpretations of that information could matter more than understanding the author's intent. When trying to understand the information for other purposes (like, reading a math paper to understand math), this does not apply.
If I were to scream "FIRE!" in a crowded theater, it could cause a lot of damage, even if my intention were completely unrelated. Perhaps I was responding to a devious friend who asked, "Would you like more popcorn? If yes, shout 'FIRE!'".
Not all speech is protected by the First Amendment, in part because speech can be used for expected harm.
One common defense of incorrect predictions is to claim that their interpretations weren't their intentions. "When I said that the US would fall if X were elected, I didn't mean it would literally end. I meant more that..." These kinds of statements were discussed at length in Expert Political Judgment.
But this defense rests on the idea that communicators should be judged on intention, rather than expected outcomes. In those cases, it was often clear that many people interpreted these "experts" as making fairly specific claims that were later rejected by their authors. I'm sure that much of this could have been predicted. The "experts" often definitely didn't seem to be going out of their way to be making their after-the-outcome interpretations clear before-the-outcome.
I think that it's clear that the intention-interpretation distinction is considered highly important by a lot of people, so much so as to argue that interpretations, even predictable ones, are less significant in decision making around speech acts than intentions. I.E. "The important thing is to say what you truly feel, don't worry about how it will be understood."
But for a consequentialist, this distinction isn't particularly relevant. Speech acts are judged on expected value (and thus expected interpretations), because all acts are judged on expected value. Similarly, I think many consequentialists would claim that there's nothing metaphysically unique about communication as opposed to other actions one could take in the world.
Some potential implications:
Much of communicating online should probably be about developing empathy for the reader base, and a sense for what readers will misinterpret, especially if such misinterpretation is common (which it seems to be).
Analyses of the interpretations of communication could be more important than analysis of the intentions of communication. I.E. understanding authors and artistic works in large part by understanding their effects on their viewers.
It could be very reasonable to attempt to map non-probabilistic forecasts into probabilistic statements based on what readers would interpret. Then these forecasts can be scored using scoring rules, just like regular probabilistic statements. This would go something like: "I'm sure that Bernie Sanders will be elected" -> "The readers of that statement seem to think the author is assigning probability 90-95% to the statement 'Bernie Sanders will win'" -> a Brier/log score.
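A minimal sketch of that last scoring step, using the standard Brier score on a hypothetical reader-interpreted probability:

```python
def brier_score(p, outcome):
    """Quadratic scoring rule for one binary forecast: 0 is best, 1 is worst."""
    return (p - outcome) ** 2

# Readers interpret "I'm sure Bernie Sanders will be elected" as roughly
# p = 0.925 (midpoint of the 90-95% range); the event did not happen.
interpreted_p = 0.925
print(brier_score(interpreted_p, 0))  # large penalty for a confident miss
```

The forecaster is scored on the probability readers took away, not on whatever hedged meaning the forecaster later claims to have intended.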
Note: Please do not interpret this statement as attempting to say anything about censorship. Censorship is a whole different topic with distinct costs and benefits.
For what it's worth, I predict that this would have gotten more upvotes here at least with different language, though I realize this was not made primarily for LW.
my personal opinion is that LW shouldn't cater to people who form opinions on things before reading them and we should discourage them from hanging out here.
I think this is a complicated issue. I can appreciate where it's coming from and can definitely imagine things going too far in either direction. I imagine that both of us would agree it's complicated, and that there's probably a line somewhere, though we may of course disagree on where exactly it is.
A literal-ish reading of your phrase is difficult for me to make sense of. I feel like I form priors on things all the time. If I know an article comes from The NYTimes vs. The Daily Stormer, that fact alone seems like useful information. There's a ton of stuff online I choose not to read because the source, or a quick read of the headline, tells me I can't trust it.
I would guess that one reason why you had a strong reaction, and/or why several people upvoted you so quickly, was because you/they were worried that my post would be understood by some as "censorship=good" or "LessWrong needs way more policing".
If so, I think that's a great point! It's similar to my original point!
Things get misunderstood all the time.
I tried my best to make my post understandable and to condition it so that people wouldn't misinterpret or overinterpret it. But then my post was misunderstood (from what I can tell, unless I'm seriously misunderstanding Ben here) literally within 30 minutes.
My attempt demonstrably failed. I'll try harder next time.
Did you interpret me to be saying, "One should be sure that zero readers will feel offended"? I think that would clearly be incorrect. My point was that there are cases where one may believe that many readers will be offended, and where changing things to avoid that carries relatively little cost.
For instance, one could make lots of points that use alarmist language to poison the well, where the language is technically correct, but very predictably misunderstood.
I think there is obviously some line. I imagine you would as well. It's not clear to me where that line is. I was trying to flag that I think some of the language in this post may have crossed it.
Apologies if my phrasing was misunderstood. I'll try changing that to be more precise.
I think I'm fairly uncomfortable with some of the language in this post being on LessWrong as such. It seems from the other comments that some people find some of the information useful, which is a positive signal. However, there are 36 votes on this, with a net of +12, which is a pretty mixed signal. My impression is that few of the downvoters left explanatory comments.
I think with any intense language, the issue isn't only "Is this effective language to convey the point without upsetting an ideal reader?", but also something like, "Given the wide variety of readers, are we sufficiently sure this won't needlessly offend or upset many of them, especially in ways that could easily be improved upon?"
I could imagine casual readers quickly looking at this and assuming it's related to the PUA community or similar groups that have some sketchy connotations.
This presents two challenges. First, anyone who makes this inference may also assume that other writers on LessWrong hold the beliefs they think this kind of writing signals. Second, it may attract other writing that is bad in ways we definitely don't want.
I would suggest that, in the future, posts like this either avoid such dramatic language here, or at the very least are made as link posts.
I'd be curious if others have takes on this issue; it's definitely possible my intuitions are off here.
Nice post! I found the diagrams particularly readable; it makes a lot of sense to me to use them for this kind of problem.
I'm not very well-read on this sort of work, so feel free to ignore any of the following.
The key question I have is the correctness of the section:
In a sense, ACDT can be seen as anterior to CDT. How do we know that causality exists, and the rules it runs on? From our experience in the world. If we lived in a world where the Newcomb problem or the predictors exist problem were commonplace, then we'd have a different view of causality.
It might seem gratuitous and wrong to draw extra links coming out of your decision node - but it was also gratuitous and wrong to cut all the links that go into your decision node. Drawing these extra arrows undoes some of the damage, in a way that a CDT agent can understand (they don't understand things that cause their actions, but they do understand consequences of their actions).
I don't quite see why causality is this flexible and arbitrary. I haven't read Causality, but I think I get the gist.
It's definitely convenient here to be uncertain about causality. But it would be similarly convenient to have uncertainty about the correct decision theory. A similar formulation could involve a meta-decision-algorithm that tries different decision algorithms until one produces favorable outcomes. Personally, I think it would be easier to convince me that acausal decision theory is correct than that a different causal structure is correct.
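As a toy illustration of that meta-level move (a sketch only: the payoff amounts are the standard Newcomb values, and the single accuracy parameter is my own simplification), one could empirically compare candidate policies and keep whichever does better:

```python
def newcomb_payoff(one_box: bool, predictor_accuracy: float = 1.0) -> float:
    """Expected payoff in Newcomb's problem against a predictor of given accuracy.

    Box B contains $1,000,000 iff the predictor expected one-boxing;
    box A always contains $1,000.
    """
    if one_box:
        return predictor_accuracy * 1_000_000
    return 1_000 + (1 - predictor_accuracy) * 1_000_000

# The meta-level step: "try each policy and see which does better."
policies = {"one-box": True, "two-box": False}
best = max(policies, key=lambda name: newcomb_payoff(policies[name], 0.9))
print(best)  # one-box, for any reasonably accurate predictor
```

The same loop could in principle range over decision algorithms rather than raw actions; the point is just that the selection happens empirically, by observed outcomes.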
Semi-relatedly, one aspect of Newcomb's problem that has really confused me is the potential for Omega to construct scenarios that favor incorrect beliefs. For example, Omega could offer $1,000 only if it could tell that one believes that "19 + 2 = 20". One could resolve that by having the participant hold uncertainty about what "19 + 2" is, try out multiple options, and see which produces the most favorable outcome.
If it's encountered the Newcomb problem before, and tried to one-box and two-box a few times, then it knows that the second graph gives more accurate predictions
To be clear, I'd assume the agent would be smart enough to simulate this rather than actually trying it? The outcome seems fairly apparent to me.