ChatGPT's new novel rationality technique of fact checking

post by ChristianKl · 2022-12-11T13:54:08.337Z · LW · GW · 7 comments


One novel applied rationality technique that could be useful is called "fact checking". This technique involves carefully evaluating the accuracy and reliability of information before using it to make decisions.

To use this technique, an individual would first identify the information that they are considering using to make a decision. This could be a news article, a study, a piece of advice from a friend, or any other source of information.

Next, the individual would carefully evaluate the information to determine its accuracy and reliability. This could involve checking multiple sources to see if the information is supported by other evidence, looking for potential biases or conflicts of interest in the source of the information, and considering the reputation and track record of the source.

Once the individual has carefully evaluated the information, they can then use it to make a more informed and rational decision. This technique could be especially useful in situations where there is a lot of conflicting or misleading information available, as it can help an individual to sort through the noise and focus on the facts.
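The procedure described above could be caricatured as a toy filter. This is only an illustrative sketch — the `Claim` fields, thresholds, and the `fact_check` function are all invented for the example, not part of any real fact-checking system:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    sources_supporting: int      # independent sources that corroborate the claim
    source_has_conflict: bool    # known conflict of interest in the original source
    source_track_record: float   # 0.0-1.0 reliability score for the source

def fact_check(claim: Claim, min_sources: int = 2,
               min_track_record: float = 0.5) -> bool:
    """Return True if the claim passes the checks described above."""
    if claim.source_has_conflict:          # potential bias or conflict of interest
        return False
    if claim.sources_supporting < min_sources:   # not corroborated by other evidence
        return False
    return claim.source_track_record >= min_track_record  # reputable source

# A claim corroborated by three sources, from a reliable unconflicted source, passes.
claim = Claim("X causes Y", sources_supporting=3,
              source_has_conflict=False, source_track_record=0.8)
print(fact_check(claim))  # True
```

Of course, the hard part in practice is estimating those inputs, which no amount of code can do for you.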

The "fact checking" technique is a simple but powerful way to apply rationality to decision-making, as it can help individuals to avoid being swayed by misinformation or bias, and make choices that are based on accurate and reliable information.

Another potential application of the "fact checking" technique is to use it to evaluate our own beliefs and assumptions. Often, we may hold onto certain beliefs or ideas without fully examining their accuracy or validity. This can lead us to make decisions based on incomplete or inaccurate information, which can have negative consequences.

By using the "fact checking" technique on our own beliefs and assumptions, we can be more objective and critical in our thinking. We can ask ourselves questions like: What is the evidence for this belief? Is there any conflicting information or evidence that I should consider? Are there any potential biases or assumptions that I am making?

By carefully evaluating our own beliefs and assumptions in this way, we can gain a more accurate and nuanced understanding of the world, which can help us to make better decisions. It can also help us to avoid falling into the trap of confirmation bias, where we only seek out information that supports our pre-existing beliefs, and ignore information that contradicts them.

Overall, the "fact checking" technique is a valuable tool for applied rationality, as it can help us to make more informed and rational decisions, both in our personal lives and in the broader world.

7 comments

Comments sorted by top scores.

comment by Malmesbury (Elmer of Malmesbury) · 2022-12-28T17:18:13.283Z · LW(p) · GW(p)

This is just my humble opinion, but I found this post hilarious.

Replies from: ChristianKl
comment by ChristianKl · 2022-12-28T17:41:58.315Z · LW(p) · GW(p)

I also found it funny, but it seems like many people don't share our sense of humor.

comment by Slider · 2022-12-11T16:28:07.194Z · LW(p) · GW(p)

The technique is old so the "novelty" is said in the spirit of irony?

A more significant problem is deciding what information to apply this technique to. If the claim is narrow (small data) and the decision is immediate and important, then certainly yes. But if the claims are many (huge data), this becomes laborious even for important decisions. If there is no pending decision and you don't fact-check, you accumulate a pool of attitudes/information tainted by possible misinformation contamination, which you can't use for important decisions. And unless the firewalling of your mind is perfect, any belief or attitude whose source pool you can't trace has a chance of coming from the risky pool.

And it is not like ChatGPT has fact-checked its own contents.

Replies from: sean-hardy, ChristianKl
comment by Sean Hardy (sean-hardy) · 2022-12-11T17:14:57.477Z · LW(p) · GW(p)

Looks to me like this post was quite clearly written by ChatGPT. It's a bit scary that this post has so many upvotes when it doesn't appear to carry much weight on a forum about rationalism.

Replies from: Slider
comment by Slider · 2022-12-11T18:05:52.438Z · LW(p) · GW(p)

Votes for "newsworthy stuff that ChatGPT does" do not seem that worrying. How do you separate those from votes about the contents?

comment by ChristianKl · 2022-12-11T17:46:58.142Z · LW(p) · GW(p)

The technique is old so the "novelty" is said in the spirit of irony?

I asked ChatGPT to come up with a novel technique and this is what it came up with. I just wrote the headline. 

Replies from: Slider
comment by Slider · 2022-12-11T18:20:07.829Z · LW(p) · GW(p)

The result seems indistinguishable from asking it to "Describe a rationality technique".

It might need to be said that readers should be aware of how much they are projecting. I think it is a very viable option that it didn't process or understand the "novelty" aspect at all. Even understanding "novelty" as "a genre of speech that pushes outside values into the community" would be quite impressive and useful, but novelty as "figure out on your own a take that nobody has already had" is very different from that.

Similarly, "rationality" is a word with many meanings. The philosophy-history-corpus sense, i.e. the thing that opposed empiricism, is one thing. "Rationality" as a subculture group designator (like "south Francic bubbly drink") is also very different from "figure stuff out in a comprehensive and detailed way".

The capability to "mention a headline from a newspaper of 2nd March 1970" does not particularly demonstrate information-processing capability over information-storage capability.

Not that this was presented as particularly impressive, but I think the fishing expedition went looking for something other than what was caught on the hook.