Do you have a satisfactory workflow for learning about a line of research using GPT-4, Claude, etc.?
post by ryan_b · 2023-11-08T18:05:18.442Z · LW · GW · 2 comments
This is a question post.
I'm trying to learn about plasma separation techniques, and I just stumbled on a line of papers that doesn't seem particularly connected to the other lines of research I was looking into. I would like to quickly get a sense of what it says without having to go through everything the old-fashioned way.
What I wanted to do was use Claude or GPT-4 to submit a pile of these papers (the recent paper I found, a big chunk of its bibliography, and a chunk of those papers' bibliographies) and then ask questions about the body of research, similar to how people do it with books. I have a sense it should be possible to use the AI to create a sort of interactive poor man's review paper this way.
So my speculative workflow looks like this (a rough code sketch follows the list):
1. Start with the paper I found.
2. Find other papers that cite it or papers it cites.
3. Get PDF versions of these papers via arXiv/Libgen/Sci-Hub.
4. Load these PDFs into Claude/GPT-4/whatever.
5. Have a conversation with the AI about the papers.
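To make the steps concrete, here is a minimal sketch of what I have in mind. It assumes the Semantic Scholar graph API for step 2, arXiv for step 3, the `pypdf` package for pulling text out of the PDFs, and Anthropic's Python SDK for step 5; the starting paper ID, model name, and prompt are placeholders, and a real version would need chunking or some retrieval scheme once the papers outgrow the context window.

```python
# Sketch only: package choices (requests, pypdf, anthropic) and the model
# name are assumptions; the paper ID and prompt are placeholders.
import requests
from pypdf import PdfReader
import anthropic

START = "arXiv:2301.00001"  # hypothetical starting paper (step 1)

# Step 2: papers the starting paper cites, via the Semantic Scholar graph API.
refs = requests.get(
    f"https://api.semanticscholar.org/graph/v1/paper/{START}/references",
    params={"fields": "title,externalIds", "limit": 50},
).json().get("data", [])

# Step 3: fetch PDFs from arXiv where an arXiv ID is available,
# then extract plain text from each (prep for step 4).
texts = []
for ref in refs:
    ids = (ref.get("citedPaper") or {}).get("externalIds") or {}
    arxiv_id = ids.get("ArXiv")
    if not arxiv_id:
        continue  # would fall back to Libgen/Sci-Hub by hand here
    path = arxiv_id.replace("/", "_") + ".pdf"
    with open(path, "wb") as f:
        f.write(requests.get(f"https://arxiv.org/pdf/{arxiv_id}").content)
    texts.append("\n".join(page.extract_text() or ""
                           for page in PdfReader(path).pages))

# Steps 4-5: put the extracted text into one prompt and ask questions.
# Fine for a handful of papers; dozens would need chunking or retrieval.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Here are several papers on plasma separation techniques:\n\n"
        + "\n\n---\n\n".join(texts)
        + "\n\nWhat are the main separation techniques discussed, "
        + "and how do they relate to each other?",
    }],
)
print(reply.content[0].text)
```

The same API has a `/citations` endpoint for the other direction of step 2, papers that cite the starting one rather than papers it cites.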
Has anyone tried this or something similar, and if so, how did it work for you?
Answers
Partial answer: I input Solomonoff (2009), "Algorithmic Probability: Theory and Applications" into ChatGPT and Claude, asking the following question:
The attached article describes Solomonoff induction. Does it answer the question of whether there's always a UTM that induces a Solomonoff prior that approximates arbitrarily well a given positive distribution?
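(For concreteness, one way to read this in symbols; the prior's definition is standard, but the notion of approximation is my own gloss, since the question leaves it open. Writing $M_U$ for the Solomonoff prior induced by a universal monotone machine $U$,

$$M_U(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

where $U(p) = x*$ means the output of $U$ on program $p$ begins with $x$, the question asks: for every positive distribution $\mu$ and every $\varepsilon > 0$, is there a choice of $U$ with $\lvert M_U(x) - \mu(x)\rvert < \varepsilon$ for all strings $x$?)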
ChatGPT annoyingly waffles back what's in the article, as it often did when I tried it on such things in the past. Claude instead is concise and precise. Complete answers follow.
Claude:
No, the article does not directly address the question of whether there always exists a universal Turing machine (UTM) that induces a Solomonoff prior that approximates arbitrarily well a given positive distribution.
The article focuses on describing Solomonoff induction and algorithmic probability more generally. It discusses properties like completeness, incomputability, diversity, and subjectivity of the Solomonoff prior, as well as applications to sequence prediction and grammar induction.
But it does not analyze the specific question of whether for any positive distribution there exists a choice of UTM that makes the corresponding Solomonoff prior arbitrarily close to that distribution. The article does state that the choice of reference machine affects the Solomonoff prior, and that by modifying the computer language it is possible to "express any conceivable a priori probability distributions." However, it does not formally prove that any positive distribution can be approximated arbitrarily well.
So in summary, while the article provides good background on Solomonoff induction, it does not directly address or prove the specific claim in the question. The universality of approximating arbitrary positive distributions with a Solomonoff prior by choosing the right UTM is not explicitly analyzed.
ChatGPT:
The article "Algorithmic Probability—Theory and Applications" by Ray J. Solomonoff discusses the concept of Algorithmic Probability (ALP) and its properties, including its completeness and incomputability. ALP is described as a method that is theoretically capable of discovering any regularity in a dataset using a relatively small sample. This feature, referred to as the completeness of ALP, implies that it can approximate the true probabilities of symbols in data generated by any unknown probabilistic source P, with a very small error rate. The error in probability approaches zero more rapidly than 1/n, indicating rapid convergence to correct probabilities. This convergence holds for any P describable by a computer program, including many functions that are formally incomputable.
However, ALP is also characterized by its incomputability, meaning it is impossible to find the truly best models with any certainty due to the infinity of models to be tested, some of which take an unacceptably long time to evaluate. Despite this, ALP's incomputability does not inhibit its practical use for prediction. In practice, approximations to ALP can be used, though it is impossible to determine how close these approximations are to the true ALP. The practical approach involves using approximations of ALP and methods to estimate their effectiveness in future problems (Out of Sample Error), methods to search for good models, and quick and simple methods to compare models. Despite its theoretical incomputability, ALP's approximations can approach completeness, offering little error in prediction if sufficient time is spent searching for good models. This aspect makes ALP a desirable feature in practical prediction systems despite its incomputable nature.
In summary, the article does address the question of whether there's always an Universal Turing Machine (UTM) that induces a Solomonoff prior approximating arbitrarily well a given positive distribution. It explains that while ALP in its true form is incomputable, in practice, approximations of ALP can be employed effectively. These approximations can approach the completeness of ALP, thereby allowing for accurate predictions and the discovery of regularities in data, albeit without the ability to ascertain how close these approximations are to the true ALP. This implies that while a UTM can be used to induce a Solomonoff prior, the exactness of this approximation in practice is limited by the incomputability of ALP and relies on the effectiveness of the approximation methods employed.
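(An aside on the "more rapidly than 1/n" claim, since the paraphrase hides the formula: the completeness result is usually stated as a bound on the total expected squared prediction error, something like

$$\sum_{n=1}^{\infty} \mathbb{E}_\mu\!\left[\big(M(a_{n+1} \mid a_{1:n}) - \mu(a_{n+1} \mid a_{1:n})\big)^2\right] \;\le\; \frac{k \ln 2}{2},$$

where $\mu$ is the computable generator and $k$ is the length in bits of a shortest program computing it; the exact constant varies by source. Because the series on the left converges, the per-step expected squared error shrinks faster than $1/n$ on average, which is the sense of the quoted claim.)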
2 comments
comment by rotatingpaguro · 2023-11-08T19:44:47.845Z · LW(p) · GW(p)
I'll mention connectedpapers.com for finding related papers through more than just direct citations.
comment by ryan_b · 2023-11-08T21:02:57.535Z · LW(p) · GW(p)
That is indeed the tool I use for the purpose! Strongly recommended. They also did a public announcement here [LW · GW].