Project ideas: Epistemics
post by Lukas Finnveden (Lanrian) · 2024-01-05T23:41:23.721Z · LW · GW · 4 comments
This is a link post for https://lukasfinnveden.substack.com/p/project-ideas-epistemics
Contents
4 comments
comment by CBiddulph (caleb-biddulph) · 2024-01-07T07:58:29.133Z · LW(p) · GW(p)
On the topic of AI for forecasting: just a few days ago, I made a challenge on Manifold Markets to try to incentivize people to create Manifold bots to use LLMs to forecast diverse 1-month questions accurately, with improving epistemics as the ultimate goal.
You can read the rules and bet on the main market here: https://manifold.markets/CDBiddulph/will-there-be-a-manifold-bot-that-m?r=Q0RCaWRkdWxwaA
If anyone's interested in creating a bot, please join the Discord server to share ideas and discuss! https://discord.com/channels/1193303066930335855/1193460352835403858
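The challenge above leaves the bot design entirely open. As one illustration, here is a minimal sketch of the decision logic such a bot might use: compare an LLM's probability estimate against the market's current price and only bet when the model sees a meaningful edge. Everything here is hypothetical — the function names, the `edge` threshold, and the commented outline of the Manifold API loop are assumptions for illustration, not part of the challenge rules or a verified client.

```python
# Hypothetical core of a Manifold forecasting bot: bet only when the
# model's probability diverges from the market price by more than `edge`.
# (The surrounding fetch/bet loop, sketched in the comments below, is an
# assumption about how such a bot would be structured.)

def decide_bet(market_prob: float, model_prob: float, edge: float = 0.05):
    """Return "YES" if the market looks underpriced by more than `edge`,
    "NO" if overpriced, or None to abstain."""
    if model_prob - market_prob > edge:
        return "YES"
    if market_prob - model_prob > edge:
        return "NO"
    return None

# Sketch of the outer loop (pseudocode in comments, since the exact
# Manifold API calls and LLM prompt are design choices left to each bot):
#   1. fetch open 1-month binary markets from the Manifold API
#   2. for each market, ask an LLM for a calibrated probability,
#      e.g. "Give P(YES) for: <question>" parsed into a float
#   3. side = decide_bet(market["probability"], model_prob)
#   4. if side is not None, place a small bet on that side via the API
```

The `edge` threshold is doing real work here: it keeps the bot from churning mana on questions where the model and the market roughly agree, and can be tuned against historical resolution data.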
comment by aogara (Aidan O'Gara) · 2024-02-21T22:06:58.807Z · LW(p) · GW(p)
An interesting question here is "Which forms of AI for epistemics will be naturally supplied by the market, and which will be neglected by default?" In a weak sense, you could say that OpenAI is in the business of epistemics, in that its customers value accuracy and hate hallucinations. Perhaps Perplexity is a better example, as they cite sources in all of their responses. When embarking on an altruistic project here, it's important to pick an angle where you could outperform any competition and offer the best available product.
Consensus is a startup that raised $3M to "Make Expert Knowledge Accessible and Consumable for All" via LLMs.
comment by aogara (Aidan O'Gara) · 2024-02-20T22:44:09.403Z · LW(p) · GW(p)
Another interesting idea: AI for peer review.
comment by Richard_Kennaway · 2024-01-06T15:03:00.274Z · LW(p) · GW(p)
On the negative side, here are three ways AI could reduce the degree to which people have accurate beliefs.
Another way is already happening, and is forecast here [LW · GW] to happen a lot more with only modest, near-term increases in performance: people attributing inner humanity to AIs that produce only its outward form.