My friends were a good deal sharper and more motivated at 18 than now at 25.
How do you tell that they were sharper back then?
But surely most people realize that it would be very hard for an organized child rape cabal to spread word about their offerings to customers without someone alerting police.
Epstein seemed to run something like an organized child rape cabal and most people involved in it didn't go to prison.
The current establishment position is that it's unethical to run randomized, double-blind placebo-controlled trials for vaccines if there's already an existing vaccine on the market that targets a given illness. Instead, new vaccines for illnesses with existing vaccines get tested against an existing vaccine. In practice, most of the commonly used childhood vaccines in the United States fall under that category.
Those are just the basic facts of the issue, whether or not you like RFK Jr. The more interesting question is whether his policy demand of requiring randomized, double-blind, placebo-controlled trials for each vaccine should be adopted. And if it gets adopted, what happens to existing vaccines that fail that standard?
If that's your theory of change, how do you think that communication should work? Who could be tasked with creating the right communication so that it will work?
If a product derives from Federally-funded research, the government owns a share of the IP for that product.
How would you do that in practice? Is it a matter of adding a standard paragraph to NIH grants?
Readers from backgrounds like mine may balk at "diversity" as an explicit benefit; however, diversity is vital to properly exploring the hypothesis space without the bias imposed by limited perspectives.
There are different kinds of diversity.
It seems to me like the decision of the Ida Rolf Foundation to start funding research had good downstream effects that we see in recent advances in understanding fascia. That foundation being able to fund things the NIH wouldn't fund was important. Getting a knowledge community like the Rolfers included alongside academic researchers is the kind of diversity that produces beneficial research outcomes.
If you follow standard DEI criteria, it doesn't help you with a task like integrating the Rolfing perspective. It doesn't get you to fund a white man like Robert Schleip.
I would suspect that coming from a background of economic poverty means that you likely have less slack to use for learning about knowledge communities besides the mainstream academic community. Having the time to spend in relevant knowledge communities seems to me like a sign of economic privilege.
Maybe you could get something relevant by focusing on diversity of illness burden within your researcher community, as people with chronic illnesses might have spent a lot of time acquiring knowledge that produces useful perspectives, but I doubt that standard DEI criteria get you there.
To the extent that our needs are "actively shoot ourselves in the foot slightly less often", there's the question of why we currently shoot ourselves in the foot. I suspect it's because of the incentives produced by the current policies.
Saying "whatever ways are reasonable" is ignoring the key issues.
Robert F. Kennedy Jr. believes that all vaccines should require placebo-controlled trials to be licensed, the way most other drugs do.
Whether or not that's reasonable is the key question.
Beyond that, a major health problem is obesity and here semaglutide seems like it would help a lot.
Do you believe that Medicaid/Medicare should just pay the sticker price for everyone who wants it?
Surely the state of the science has advanced since this lawsuit took place.
Yes, it does. We now have meta reviews which were not common back in 1990.
Cochrane is one of the best sources for metastudies and their read of the scientific evidence for chiropractics is: "The review shows that while combined chiropractic interventions slightly improved pain and disability in the short term and pain in the medium term for acute and subacute low-back pain, there is currently no evidence to support or refute that combined chiropractic interventions provide a clinically meaningful advantage over other treatments for pain or disability in people with low-back pain."
While chiropractics isn't shown to be superior to conventional treatment, it's also not shown to be without effect. Given that insurance covers a variety of treatments for back pain that are just as effective as chiropractics, the AMA has essentially been shown to be wrong.
To me, it's quite strange to advocate "Don't Dismiss on Epistemics" while at the same time ignoring scientific meta reviews on the topic.
I simply find it interesting that people feel the need to justify their terminal goals (unless they are emotions), and that the only way they can seem to do it is by associating it with an emotion.
I don't find it surprising that when you ask people "Why do you want that?", they feel pressure to justify themselves. That seems to me the basic way normal human beings react to social inquiries. If you ask "Why X?", normal people feel pressured to provide a justification.
Yes, most people are generally bad at updating. That has nothing to do with whether or not someone is a libertarian.
The reason Zvi is surprised comes downstream of numerical literacy and not downstream of him being libertarian-leaning.
A 28% increase suggests that more than one in five bankruptcies would be due to sports online gambling (which would be a subset of gambling in general).
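A quick back-of-the-envelope check, under the assumption that the 28% is an increase in total bankruptcies and that the additional cases are the ones attributable to sports online gambling:

0.28 / (1 + 0.28) ≈ 0.22, i.e. more than one in five.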
If I ask Claude for the top five reasons people go bankrupt in the US in 2023 I get:
1. Medical Debt
2. Job Loss/Income Reduction
3. Credit Card/Consumer Debt
4. Divorce/Family Issues
5. Housing Market Issues/Mortgage Debt
ChatGPT gives me:
Loss of Income
Medical Expenses
Unaffordable Mortgages or Foreclosures
Overspending and Credit Card Debt
Providing Financial Assistance to Others
I think Claude and ChatGPT do summarize the common wisdom about what normal people believe the most important factors for bankruptcy are, and that does not include gambling (let alone sports online gambling specifically). If you asked a normal person for the top five reasons, they would likely come up with a similar list that does not mention sports online betting.
Most people don't have very fixed ideas about how large a 28% overall increase in bankruptcies is.
If you asked most people without a libertarian outlook to rank different factors that lead to an increase in bankruptcy, I would not expect them to be able to compare the factors accurately and conclude that sports online gambling alone has such a strong influence.
I would add that convincing Musk to take action against Altman is the highest ROI thing I can think of in terms of decreasing AI extinction risk.
I would expect the issue isn't convincing Musk to take action but finding effective actions that Musk could take.
The United States has laws that prevent the US intelligence and defense agencies from spying on their own population. The Snowden revelations showed us that the US intelligence and defense agencies did not abide by those limits.
Facebook has a usage policy that forbids running misinformation campaigns on their platform. That did not stop US intelligence and defense agencies from running disinformation campaigns on their platform.
Instead of just trusting contracts, Anthropic could add oversight mechanisms, so that a few Anthropic employees can look over how the models are used in practice and whether they are used within the bounds that Anthropic expects them to be used in.
If all usage of the models is classified and out of reach of checking by Anthropic employees, there's no good reason to expect the contract to limit US intelligence and defense agencies if they find it important to use the models outside of how Anthropic expects them to be used.
For example, with carefully selected government entities, we may allow foreign intelligence analysis in accordance with applicable law. All other use restrictions in our Usage Policy, including those prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations, remain.
This sounds to me like a very carefully worded non-denial denial.
If you say that one example of an exception to your terms is allowing a select government entity to do foreign intelligence analysis in accordance with applicable law, and not to run disinformation campaigns, you are not denying that another exception you could make is to allow disinformation campaigns.
If Anthropic were sincere about this being the only exception it makes, it would be easy to add a promise to Exceptions to our Usage Policy that Anthropic will publish all exceptions it makes, for the sake of transparency.
Don't forget that probably only a tiny number of Anthropic employees have seen the actual contracts, and there's a good chance that those are barred by classification from talking with other Anthropic employees about what's in the contracts.
At Anthropic you are a bunch of people who are supposed to think about AI safety and alignment in general. You could think of this as a test case of how to design mechanisms for alignment, and the Exceptions to our Usage Policy seem like a complete failure in that regard, because they neither contain a mechanism to make all exceptions public nor any mechanism to make sure that the policies are followed in practice.
AlphaFold doesn't come out of academia. That doesn't make it non-scientific. As Feynman said in his cargo-cult science speech, plenty of academic work is not properly tested. Being peer-reviewed doesn't make something scientific.
Conceptually, I think you are making a mistake when you treat ideas and experiments as the same and equate the probability of an experiment finding a result with the probability of the idea being true. Finding a good experiment to test an idea is nontrivial.
A friend of mine was working in a psychology lab and according to my friend the professor leading the lab was mostly trying to p-hack her way into publishing results.
Another friend spoke approvingly of the work of the same professor because the professor managed to get Buddhist ideas into academic psychology, and now the official scientific definition of the term resembles certain Buddhist notions.
The professor has a well-respected research career in her field.
CFAR's approach to the problem was internal double crux. If internal parts disagree and have different beliefs, internal double crux is a way to align them.
Leverage Research developed another approach to the issue, Belief Reporting.
While that tweet says good things about his relationship with truth, his defense of talking about immigrants eating cats and dogs because constituents told him so, without checking whether or not that's true, was awful.
Maybe he felt like he needed to do it because of political pressure and felt dirty doing it, but it was still awful by rationalist standards.
I think there's a good chance that JD Vance is better than the average US politician, but there's no good reason to see him as a rationalist.
Why would you trust CSIS here? A US think tank like that is going to want to say publicly that invading Taiwan is bad for the Chinese.
Given that the Supreme Court upheld the Voting Rights Act of 1965, state legislatures aren't able to do just whatever they want without limits.
What those limits would be in a particular case is something you only find out after a few legal battles.
Nothing in that announcement suggests that this is limited to intelligence analysis.
U.S. intelligence and defense agencies do run misinformation campaigns such as the antivaxx campaign in the Philippines, and everything that's public suggests that there's not a block to using Claude offensively in that fashion.
If Anthropic has gotten promises that Claude is not being used offensively under this agreement they should be public about those promises and the mechanisms that regulate the use of Claude by U.S. intelligence and defense agencies.
There are different ways you can define a term. You can define a term like depression as being about certain neurological mechanisms, or you can define it as whether a psychiatrist labels it as depression. The DSM is famously neutral about mechanisms and just cares about a list of symptoms as assessed by the subjective judgement of a psychiatrist.
I think the DSM having that perspective is holding back progress at dealing with mental illnesses. While the DSM doesn't directly have a definition for trauma, I would expect it would be good to have nonsubjective definitions for trauma as well.
When it comes to Buddhist practice, it's worth noting that practicing techniques by the book is not how Buddhism was practiced for most of the time in the last 2500 years. It was mostly an oral tradition and as such the knowledge that's passed down from teacher to student evolves over time in various ways.
Many modern Buddhist traditions put much more emphasis on meditation in contrast to ritualized behavior.
In Buddhism (and in Christianity for that matter), meditation was for thousands of years largely done in monasteries and not by lay people. In many Buddhist communities, "lay people aren't supposed to meditate" is something you could call "ancient wisdom".
If someone in a Western context convinces you that following some practice is ancient wisdom, they are likely doing a lot of picking and choosing in a way that does not make clear how ancient the thing they are promoting actually is.
There seem to be clinical trials underway for regrowing teeth in Japan: https://www.popularmechanics.com/science/health/a60952102/tooth-regrowth-human-trials-japan/
I gave you three sources that are influential to my views. A Spiegel article, a conversation with someone who in the past was planning to run a brothel (and spoke with people who actually run brothels in Germany for that reason) and police sources.
I did not link to some activist NGO run by prudish leftists or religious people making claims as the reason for believing what I believe.
In general, it's hard to know what's actually going on when it comes to crime. If you spoke in the 1950s about the Italian mafia, you had plenty of people calling you racist against Italians and saying that there's no mafia.
My point is that the behavior is not well modeled as "hunting humans". They don't attack humans with the intent to kill and eat them as prey.
The dogs are not hunting humans but want to defend territory or something similar.
If we take the issue of forced prostitution, the official numbers are estimates, and by their nature estimates are not exact.
https://www.spiegel.de/international/germany/human-trafficking-persists-despite-legality-of-prostitution-in-germany-a-902533.html would be a journalistic story about prostitution in Germany that describes what happens here with legalized prostitution.
I was once talking with someone who in the past was thinking about opening a brothel and who had some insight about how brothels are run in Germany and who said that a lot of coercion is used.
Recently, I read something from a policeman who was complaining that the standard for proving coercion of prostitutes is too high. Proving that a prostitute over 21 who left was beaten was not enough to convince the court that she falls under the criteria of outlawed exploitation of prostitutes.
The key issue I raised was not about the exact number but about prostitutes not being prisoners of war and still being better modeled as slaves due to dominance than slaves due to poor bargaining power.
I consider it worthwhile to provide sources so that you can read their methodology yourself. I don't see a good reason for me to give you my interpretation of their methodology.
The number is from the International Labour Organization of the UN.
The current number for slaves in the world is something like 50 million. That's a significant number of people. In prostitution, you unfortunately have women who are enslaved through violence and other forms of dominance, not just through the poor bargaining power of the prostitutes.
Even if the term were an improvement, why would changing out one term for another make you say "early signs have been promising"? Promising in the sense that he will come up with new terms, as if the core problem of EA were not having the right labels to speak about what EAs are doing?
I would find the perspective that EA's biggest problem is using the wrong labels to be a quite strange one.
A good post about a strategy that attempts to produce indirect effects would lay out the theory of change through which the indirect effects would be created.
I did read his post. The question is not whether the term makes sense but whether it's a good strategy.
It's not about getting people to act according to principles but to rebrand what previously would be called cause-neutral as principle-first and continue to do the same thing CEA did in the past.
The linked article has a bunch of examples in the section "A bunch of examples". None of those have any probabilities attached. The post does not list "your confidence" in the section "Valuable information to list in an epistemic status".
Some of them do have words about confidence, but even a status like "Pretty confident. But also, enthusiasm on the verge of partisanship" is more than just the author sharing their confidence. That was the first in the list.
The second is: "I have worked for 1 year in a junior role at a large consulting company. Most security experts have much more knowledge regarding the culture and what matters in information security. My experiences are based on a sample size of five projects, each with different clients. It is therefore quite plausible that consulting in information security is very different from what I experienced. Feedback from a handful of other consultants supports my views."
This statement makes it transparent to the reader why the author believes what they believe. It's up to the reader to decide how confident they are in what the author is saying based on that statement. That's similar to how the information about partisanship communicated in the first case is about transparency, so that the reader can make better decisions instead of just trusting the author.
The act of sharing the information with the reader is an act of respecting the agency of the reader. In debates like https://www.lesswrong.com/posts/u9a8RFtsxXwKaxWAa/why-i-quit-effective-altruism-and-why-timothy-telleen-lawton respecting the epistemic agency of other people is a big deal.
Isn't it basically a policy choice over there to require net metering and thus make it not economical to get your power over the grid?
Many African countries like Nigeria have the problem that nobody builds power plants and an electric grid because there are laws that prevent businesses from providing that service profitably.
It seems like this would mean that Puerto Rico is going to move toward third-world energy reliability, with everyone needing to buy their own generators the way people in Nigeria have to, while in total paying more for energy than they would if energy were provided in a more centralized fashion.
Epistemic status is not a measure of confidence but of reasons for confidence. "It's written in a textbook" is an epistemic status of a claim but does not say how confident I am in the claim.
When people start their posts with "Epistemic status: ..." they usually are not listing probabilities.
The idea that either A, B, or something in between has to be right is wrong for many political issues. It's possible that both A and B are wrong. I don't see why one would start with a different assumption.
For most issues, you are not required to have an opinion and it's often better to focus your energies on issues where you have unique insight or power to affect the issue than focusing on national level political issues where you have neither unique insight nor power to influence them in a meaningful way.
Foreign actors will attempt to push people on twitter/reddit/etc. towards either (1) or (5), even if the answer is really (3) for them. Everyone I interact with is either partially influenced by these actors or discusses their opinions with people who are influenced by these actors.
Why do you consider it better to be manipulated by domestic actors than foreign actors? Why does it matter whether the actors are foreign?
I did talk with Geoff Anders about this. He told me that there's no legal agreement between CEA and Leverage. However, there are Leverage employees who are ex-CEA and thus bound by legal agreements. Geoff himself said that he would consider it positive for the information to be public, but he would not want to pick another fight with CEA by publicly talking about what happened.
The whole article is about Amazon employees being on the clock while they are using the bathroom. Spending more time in the bathroom reduces the productivity per hour on their KPIs, and thus they are incentivized against spending time in the bathroom.
Yes, giving money in the form of a grant might not be the best way to fund good posts, as it makes it harder to criticize the entity that funds you; decentralized crowdfunding is better.
Maybe an EV blog post saying something like:
Currently, we see EA as insight-constrained. When funding people directly through grants, they have to think a lot about how to stay in good graces when they voice their feedback.
We think that individuals who donate their 10% through Giving What We Can should consider that for some causes, like most of what GiveWell recommends, getting a $1,000,000 grant from a billionaire is equal to getting $1,000 from a thousand people, while other causes are harder to fund via grants because it's important that grant recipients feel like they can give honest feedback.
Given that we are constrained on insight about how to solve the problems within EA, donating via Patreon to writers who provide insight, and who are better able to do so if they are funded independently without relying on big EA institutions, is a highly effective way to donate for those who give their 10%.
One writer we think has had a nontrivial impact for the better is Elizabeth because we believe X, Y, Z.
If the problem is, as lincolnquirk describes, that in general they don't have many ideas about how to do better, and your writing had nontrivial impact by giving ideas about what to do better, that would be the straightforward way forward.
The post basically says that taking actions like "running EA Global" is the "principles-first" approach because it is not "cause-first". None of the actions he advocates as principles-first are about rewarding people for upholding principles or holding people accountable for violating principles.
How can a "principles-first" strategy that does not deal with the question of how to set incentives for people to uphold principles be a good strategy?
If you read the discussion on this page with regard to university groups not upholding principles, there are issues. Zach's proposed strategy sees funding them in the way they currently operate as a good example of what he sees as principles-first because:
Our Groups program supports EA groups that engage with members who prioritize a variety of causes.
Our current training for facilitators for the intro program emphasizes framing EA as a question and not acting as if there is a clear answer.
This suggests that Zach sees the current training for facilitators as already working well and not as something that should be changed. Suggesting that EA groups are principles-first just because they prioritize a variety of causes seems to me like appropriating the term to talk about something that's not about principles.
When it comes to the actual principles, not including integrity, honesty, and thinking about incentives as key principles also feels like a bad choice. One lesson from the whole FTX saga would be that those principles are important, and that's not a lesson that Zach draws.
If you think this is a good strategy, what would a bad "principles-first" strategy look like? What could Zach have done worse?
Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world. That's why I'm pretty sure your work has had nontrivial impact. I am not too surprised that its impact hasn't become apparent to you though.
[...]
I don't see solutions or great ways forward yet, and I sense that nobody really does
That does sound like learned helplessness, and like the EA leadership filters out people who would see ways forward.
Let me give you one:
If people in EA considered her critiques to have real value, the obvious step would be to give Elizabeth money to write more. Given that she has a Patreon, the way to give her money is pretty straightforward. If the writing influences what happens in EV board discussions, paying Elizabeth for the value she provides to the board would be straightforward.
If she were paid decently, I would expect she would feel she's making an impact.
Paying Elizabeth might not be the solution to all of EA's problems, but it's a way to signal priorities. Estimate the value she provides to EA, pay her for that value, and publicly publish, as EV, a writeup saying that this is the amount of value EV thinks she provides to EA and was paid for.
The early signs have been promising.
What concrete things did he change at CEA that are promising signs?
If I say that other psychiatrists at the conference are engaging in an ethical lapse when they charge late fees to poor people, then I'm engaging in an uncomfortable interpersonal conflict. It's about personal incentives that actually matter a lot to the day-to-day practice of psychiatry.
While the psychiatrists are certainly aware that they charge poor people, they likely think of it as business as usual rather than considering it an ethical issue.
If we take Scott's example of psychiatrists talking about racism being a problem in psychiatry, I don't think the problem is that racism is unimportant. The problem is rather that you can score points by virtue signaling when talking about that problem, and find common ground around the virtue signaling if you are willing to burn a few scapegoats, while talking about the issue of charging poor people late fees is divisive.
Washington DC is one of the most liberal places in the US, with people who are good at virtue signaling and pretending they care about "solving systematic racism", yet they passed a bill to require college degrees for childcare services. If you apply the textbook definition of systematic racism, requiring college degrees for childcare services creates a system that prevents poor Black people from looking after children.
Systematic racism that prevents poor Black people from offering childcare services is bad, but the people in Washington DC are good at rationalizing. The whole discourse about racism is of a nature where people score points by virtue signaling about how much they care about fighting racism. They practice steelmanning the concept of systematic racism all the time, and yet they pass systematically racist laws because they don't like poor Black people looking after their children.
If you tell White people in Washington DC who are already steelmanning systematic racism to the best of their ability that they should steelman it more because they are still inherently racist, they might even agree with you, but it's not what's going to make them change the laws so that more poor Black people can look after their children.
That tactic helps reduce ignorance of the "other side" on the issues that get the steelmanning discussion
If you want to reduce ignorance of the "other side", listening to the other side is better than trying to steelman it. Eliezer explained the problems with steelmanning well in his interview with Lex Fridman.
Also, in judging a strategy, we should know what resources we assume we have (e.g. "the meetup leader is following the practice we've specified and is willing to follow 'reasonable' requests or suggestions from us"), and know what threats we're modeling.
Yes, as far as resources go, you have to keep in mind that all people involved have their own interests.
When it comes to threat modeling, reading through Ben Hoffman's critique of GiveWell, based on his employment there, gives you a good idea of what you want to model.
The problem is that even small differences in values can produce massive differences in outcomes when the difference is caring about truth while keeping the other values similar. As Elizabeth wrote, "Truthseeking is the ground in which other principles grow."
I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope this is at least a bit hope-inspiring, since I get the sense that a big part of your critique is that EA has lost its principles.
The problem is that Zach does not mention being truth-aligned as one of the core principles that he wants to uphold.
He writes "CEA focuses on scope sensitivity, scout mindset, impartiality, and the recognition of tradeoffs".
If we take an act like deleting inconvenient information, such as removing the phrase Leverage Research from a photo on the CEA website, it violates the principle of being truth-aligned but none of the ones that Zach mentioned.
If I asked Zach whether he would release the people that CEA binds with nondisclosure agreements about that one episode with Leverage, about which we unfortunately don't know more than that there are nondisclosure agreements, I don't think he would release them. Releasing them would be a sign of being truth-aligned, but none of the principles Zach names point in the direction of releasing people from the nondisclosure agreements.
Saying that your principle is "impartiality" instead of saying that it is "understanding conflicts of interests and managing them effectively" seems to me like a bad sign.
When talking about kidney donation at the start, he celebrates self-chosen sacrifice as an example of great ethics. Kidney donation is extreme virtue signaling. I would rather have EA value honesty and accountability than celebrate self-sacrifice. Instead of celebrating people for taking actions nobody would object to, he could have celebrated Ben Hoffman for the courage to speak out about problems at GiveWell and face social rejection for it.
Therefore, the idea of confronting someone like Jacy and saying "Your arguments are bad, and you seem to be discouraging critical thinking, so we demand you stop it or we'll kick you out" seems like a non-starter in a few ways.
It's worth noting that Jacy was sort-of kicked out (see https://nonprofitchroniclesdotcom.wordpress.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/ )
Fleshing it out a bit more... If a group has an explicit mission, then it seems like one could periodically have a session where everyone "steelmans" the case against the mission.
To me, that will lead to an environment where people think that they are engaging with criticism without having to really engage with the criticism that actually matters.
From Scott's Criticism Of Criticism Of Criticism:
Disruption! Grabbing the third rail! Asking about what we’re overlooking! It seems that psychiatry, like EA, is really good at criticizing itself. Even better: these aren’t those soft within-the-system critiques everyone is worried about. These are hard challenges to the entire paradigm of capitalism or whatever.
And it’s not just the APA as an institution. Go to any psychiatrist at the conference and criticize psychiatry in these terms - “Don’t you think our field is systemically racist and sexist and fails to understand that the true problem is Capitalism?” and they will enthusiastically agree and maybe even tell you stories about how their own experience proves that’s true and how they need to try to do better.
[...]
Here are other criticisms that I think can actually start fights: should tricyclics be higher on our treatment algorithm for depression than atypical antipsychotics? Should we use levothyroxine more (or less) often? And my nominee for “highest likelihood of people actually coming to blows” would be asking if they’re sure it’s ethical to charge poor patients three-digit fees for no-shows.
All of these are the opposite of the racism critique: they’re minor, finicky points entirely within the current paradigm that don’t challenge any foundational assumptions at all.
If you frame the criticism as having to be about the mission of psychiatry, it's easy for people to see "Is it ethical to charge poor patients three-digit fees for no-shows?" as off-topic.
In an organization like GiveWell, people who criticize GiveWell's mission in such a way are unlikely to talk about the ways in which GiveWell favors raising more donations over being more truthseeking that Ben Hoffman described.
Was there ever a time where CEA was focusing on truth-alignment?
It doesn't seem to me like "they used to be truth-aligned and then recruited in a way that caused a value shift" is a good explanation of what happened. They always optimized for PR instead of optimizing for truth-alignment.
It's been quite a while since they edited Leverage Research out of the photos they published on their website, but the kind of organization where people consider it reasonable to edit photos that way is far from truth-aligned.
Edit:
Julia Wise messaged me and made me aware that I confused CEA with the other CEA. The photo incident happened on the 80,000 Hours website, and the page talks about promoting CEA events like EA Global and the local EA groups that CEA supports (at the time, 80,000 Hours was part of the CEA that's now called EV). I don't think that makes CEA completely innocent here, because they should see to it that people who promote their events under the banner of their organization name behave ethically, but it is a valid explanation for why this wouldn't be central to CEA's mistakes page, and why they want to focus that page on mistakes made by direct employees of the entity that's now called CEA.
Most of the time, the data you gather about the world is a bunch of facts with probabilities attached to the individual data points, and what you want as an output is also probabilities over individual data points.
As far as my own background goes, I have not studied logic or the math behind the AI algorithm that David Chapman wrote. I did study bioinformatics, and in that program we talked about the probability calculations done in bioinformatics, so I have some intuitions from that domain; I'll take a bioinformatics example even if I don't know exactly how to productively apply predicate calculus to it.
If you, for example, get input data from gene sequencing with billions of probabilities (a_1, a_2, ..., a_n), you want output data about whether or not individual genetic mutations exist (b_1, b_2, ..., b_m), and not just the joint probability P(B) = P(b_1) * P(b_2) * ... * P(b_m).
If you have m = 100,000 possible genetic mutations, P(B) is a very small number with little robustness to error. A single bad b_x will propagate and make your total P(B) unreliable. You might have an application where getting b_234, b_9538, and b_33889 wrong is an acceptable error because most of the values were good.
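To make the fragility concrete, here's a minimal Python sketch with made-up numbers (the 0.999 confidence values and the single 0.01 outlier are hypothetical, not real sequencing output):

```python
import math

# Hypothetical per-mutation call probabilities: most are confident...
probs = [0.999] * 100_000
# ...but one badly calibrated call drags the joint product down.
probs[234] = 0.01

# Joint probability P(B) = product of all P(b_i). Summing logs avoids
# floating-point underflow; the result is astronomically small and
# dominated by the single bad value.
log_p_joint = sum(math.log(p) for p in probs)
print(f"log P(B) = {log_p_joint:.1f}")  # about -104.7

# Per-datapoint output: each P(b_i) stays interpretable, and a bad
# b_x is an isolated, acceptable error instead of poisoning the total.
unreliable = [i for i, p in enumerate(probs) if p < 0.95]
print(f"unreliable calls: {unreliable}")  # [234]
```

Reporting the per-datapoint probabilities keeps each call usable even when a few of them are wrong, while the joint product is only as trustworthy as its worst factor.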