Posts

What's the big deal about Bayes' Theorem? 2021-01-26T06:08:45.188Z

Comments

Comment by AVoropaev on OpenAI: Facts from a Weekend · 2023-11-20T16:03:25.577Z · LW · GW

What's the source of that 505 employees letter? I mean the contents aren't too crazy, but isn't it strange that the only thing we have is a screenshot of the first page?

Comment by AVoropaev on Monthly Roundup #7: June 2023 · 2023-06-06T18:59:40.228Z · LW · GW

Re: TikTok viral videos. I think the cliff is simply because recent videos have had too little time to be watched 10m times. The second graph in the article is not the same plot redone for 0.1m views; it shows average views per week (among videos with >0.1m views), which stays stable.

Comment by AVoropaev on LM Situational Awareness, Evaluation Proposal: Violating Imitation · 2023-04-27T11:36:25.482Z · LW · GW

I don't understand the point of questions 1 and 3.

If we forget about the details of how the model works, question 1 essentially checks whether the entity in question has a good enough RNG. Which doesn't seem particularly relevant? A human with a vocabulary and random.org can do that. AutoGPT with access to a vocabulary and random.org also has a rather good shot. A superintelligence that for some reason decides not to use an RNG and answers deterministically will fail. I suppose it would be very interesting to learn that, say, GPT-6 can do it without an external RNG, but what would that tell us about its other capabilities?

Question 3 checks for something weird. If I wanted to pass it, I'd probably have to precommit to answering certain weird questions in a particular way (and also make sure I always have access to some RNG). Which is a weird thing to do? I expect humans to fail at that, but I also expect almost every possible intelligence to fail at that.

In contrast, question 2 checks for something like "which part of the input do you find most surprising", which seems like a really useful skill to have, and we should probably watch out for it.

Comment by AVoropaev on Stupid Questions - April 2023 · 2023-04-08T20:49:08.035Z · LW · GW

Yeah, you are right. It seems that it was actually one of the harder ones I tried. This particular problem was solved by 4 of 28 members of a relatively strong group. I distinctly remember also trying some easy problems from a relatively weak group, but I don't have notes and Bing doesn't save chats.

I guess I should just try again, especially in light of gwillen's comment. (By the way, if somebody with access to the actual GPT-4 is willing to help me test it on some math problems, I'd really appreciate it.)

Comment by AVoropaev on Stupid Questions - April 2023 · 2023-04-08T20:31:34.758Z · LW · GW

That would explain a lot. I've heard this rumor, but when I tried to trace the source, I couldn't find anything better than guesses. So I dismissed it, but maybe I shouldn't have. Do you have a better source?

Comment by AVoropaev on Stupid Questions - April 2023 · 2023-04-07T03:15:27.017Z · LW · GW

I agree that there are some impressive improvements from GPT-3 to GPT-4. But they seem to me a lot less impressive than the jump from GPT-2 producing barely coherent texts to GPT-3 (somewhat) figuring out how to play chess.

I disagree with your take on LLMs' math abilities. Wolfram Alpha helps with tasks like the SAT -- and GPT-4 is doing well enough on those. But for some reason it (at least in its Bing incarnation) has trouble with simple logic puzzles like the one I mentioned in another comment.

Can you say more about the successes with theoretical physics concepts? I don't think I've seen anybody try that.

Comment by AVoropaev on Stupid Questions - April 2023 · 2023-04-07T00:30:05.264Z · LW · GW

I didn't say "it's worse than a 12-year-old at any math task". I meant nonstandard problems. Perhaps that's the wrong English terminology? Something like easy olympiad problems?

The actual test I performed was "take several easy problems from a math circle for 12-year-olds and try various 'let's think step-by-step' prompts to make Bing write solutions".

Example of such a problem:

Between 20 poles, several ropes are stretched (each rope connects two different poles; there is no more than one rope between any two poles). It is known that at least 15 ropes are attached to each pole. The poles are divided into groups so that each rope connects poles from different groups. Prove that there are at least four groups.

Comment by AVoropaev on Stupid Questions - April 2023 · 2023-04-06T23:14:46.613Z · LW · GW

Two questions about capabilities of GPT-4.

  1. The jump in capabilities from GPT-3 to GPT-4 seems much, much less impressive than the jump from GPT-2 to GPT-3. Part of that is likely because later versions of GPT-3 were noticeably smarter than the first ones, but that reason doesn't seem sufficient to me. So what's up? Should I expect that GPT-4 -> GPT-5 will be barely noticeable?
  2. In particular, I am rather surprised at the apparent lack of ability to solve nonstandard math problems. I didn't expect it to beat the IMO, but I did expect that problems for 12-year-olds would be accessible, and they weren't. (I personally tried only Bing, so perhaps regular GPT-4 is better. But I've seen only one successful attempt with GPT-4, and it was mostly trigonometry.) So what's up? I am tempted to say that math is just harder than economics, biology, etc. But that's likely not it.

Comment by AVoropaev on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-03-21T06:19:15.613Z · LW · GW

What improvements do you suggest?

Comment by AVoropaev on What DALL-E 2 can and cannot do · 2022-05-16T09:54:11.591Z · LW · GW

Can it in some way describe itself? Something like "picture of DALL-E 2".

Comment by AVoropaev on Ukraine Post #2: Options · 2022-03-13T01:03:04.211Z · LW · GW

#2: My impression is that something like 2%-10% of the Ukrainian population believed that a month ago (would you consider that worrying enough?). My evidence for this is very shaky, and it is indeed quite possible that I am overestimating it by an order of magnitude (still kind of worrying, though I might be overestimating even more).

First, my aunt is among them. Second, over the last few years I've seen multiple (something like 5-10, concentrated around the present date?) discussions on social media where friends of friends (all Russians) said that they believe in a nazi-controlled Ukraine since their relatives in Ukraine have in one way or another confirmed it (perhaps such relatives are predominantly from the occupied territories?).

Third, a lot of Russian families have close relatives in Ukraine (I can't find any statistics, but eyeballing the families of my friends, I'd say something like 1/3 in Moscow). If a lot of such relatives believed Russian propaganda, that would explain why so many Russians believe it as well (there are rumors that some are choosing to believe TV over their relatives, but I haven't personally witnessed any of that). And this "a lot of such relatives" doesn't need to be implausibly big, since Ukrainians who believe Russian propaganda are likely overrepresented among close relatives of Russians.

> On #3 I would very much expect the opposite. People at LW are very good vs. such tactics in general, and are high-information, and have access to Western sources, and this stuff is optimized to appeal to people in the former USSR.

I agree with all your points, but I don't think that it is the opposite of what I meant to say. When I was talking about being at a disadvantage, I didn't mean that western lesswrongers who visit this site will be more affected by it than average Russians. I meant that a western lesswronger will have not only obvious advantages (the ones you listed), but also some disadvantages, perhaps less obvious to westerners (is "disadvantage" the wrong word to use here?). That's why I was talking about "underestimating danger" (another part of that was an attempt to make people even more cautious).

Yes, sure, the danger is not that big, but I wouldn't be surprised if it noticeably negatively affected at least 0.1% of lesswrongers who visit such a site (obviously conditioned on a lot of them visiting it), and I absolutely won't risk something like that just for curiosity.

> Strangely, this site seems like it's an attempt to be a sane Russian-slanted source

I am following my own advice and haven't read their articles since around 2013, when they lost their independence (and I wasn't a regular reader before that). But my not very educated guess would be that if your observation is correct, then it is one of those news sources that were initially independent, then became government-controlled, and are still posing as mostly independent, i.e. lying only when it matters. Kind of optimized for highly educated opposition-leaning people in the former USSR.

Comment by AVoropaev on Ukraine Post #2: Options · 2022-03-12T22:54:56.719Z · LW · GW

Yes, I think that it is the most likely scenario. Still, it bothered me enough that I mentioned it -- I consider such an omission 2-3 times more likely in a world where there are other important (intentional) omissions that I haven't noticed than in a world where he is honest.

I still think that reading Galeev is worth it and that he is a trustworthy enough source. But if, for example, he makes a thread on the modern Russian opposition that doesn't mention Navalny, it will be a huge red flag for me.

Comment by AVoropaev on Ukraine Post #2: Options · 2022-03-11T16:03:38.212Z · LW · GW

To clarify: this site contains very effective propaganda, which makes it a cognitohazard. You are likely underestimating its danger. It is not "just a bunch of fake statements". It is "a bunch of statements optimized for inflicting particular effects on its readers". Such "particular effects" are not limited to believing what the news says. In fact, the news regularly contradicts what it said a few months ago even in peacetime, so literally believing what it says is probably not the point.

Before reading propaganda consider that such materials:

1) Convinced a lot (a majority?) of Russians that the Russian army is heroically fighting Western nazis.

1.1) Not all such Russians are dumb -- some of them are rather smart, there are some scientists, etc.

2) Convinced some (a sizable minority?) of Ukrainians that they are living under nazi rule.

3) It is possible that you are at a disadvantage compared to all those people, since you likely haven't encountered such propaganda before.
For example, there are a lot of countermemes to government propaganda in Russian culture. Some of them are exploited by modern propaganda ("All other media are also lying!"), but I suspect that their effect is net positive, especially for more educated people.

Comment by AVoropaev on Ukraine Post #2: Options · 2022-03-10T22:21:02.162Z · LW · GW

As a Russian I confirm that everything Galeev says seems legit. I haven't been following our politics that much, but Galeev's model of Putin fits my observations.

The only thing that looked a little suspicious to me was the thread on Russian parliamentarism -- there was an opportunity to say something about Navalny's team there (e.g. as a central example of a party that can't be registered, or something about them organizing protests), and I expected that he would mention it, but he didn't. In fact, I don't think he has ever mentioned Navalny in any of his threads. Why?

Comment by AVoropaev on Ukraine update 06/03/2022 · 2022-03-07T12:02:35.492Z · LW · GW

I think that if LessWrong wants to be less wrong, then questions like "why do you believe that?" should not be downvoted.

As for the question itself, I know next to nothing about the situation at this NPP, but just from priors I'd give 70% that if someone shelled it, it was the Russian army.

1) It is easier to shoot at an NPP if you don't know what you're shooting at. The Russian army is much more likely to mistake this target for something else.

2) P(Russian government lies that it wasn't them | it was them) > P(Ukrainian government lies that it wasn't them | it was them). (I believe this because I believe that the left-hand number is very, very close to 1.)

3) I am under the impression that the Russian army uses a lot more artillery. This matters somewhat less for such an important target (the Ukrainian army is probably incentivized to concentrate its limited resources here), but it probably still matters.

I'd also like to hear the opinion of somebody who has more information about this.

Comment by AVoropaev on Ukraine Situation Report 2022/03/01 · 2022-03-03T12:45:41.743Z · LW · GW

Update: the Prosecutor General's Office says that protest will be treated as "participation in a radical group", which carries up to 6 years. Probably won't be applied too massively, at least initially.

Comment by AVoropaev on Ukraine Situation Report 2022/03/01 · 2022-03-02T13:06:38.269Z · LW · GW

Yeah, doesn't seem to be true. There is this law, and a general attitude of treating posts on vk/facebook as mass media -- but it is "just" 3 years or a huge fine, and it is rarely enforced (yet). (There might be some other relevant laws that I don't know about, but I would be very surprised (and concerned) if they involved 10-year prison terms.) It might be wise to take some minimal precautions though -- like making all posts that are not meant to be read by tovarishch major "friends only".

Comment by AVoropaev on Seek Mistakes in the Space Between Math and Reality · 2022-03-02T12:08:49.732Z · LW · GW

Thank you for treating it as a "today's lucky 10,000" event. I am aware of quines (though not much more than just "aware"), and what I am worried about is whether the people who created FairBot were careful enough.

Comment by AVoropaev on Seek Mistakes in the Space Between Math and Reality · 2022-03-02T12:05:08.643Z · LW · GW

"Definition" was probably a wrong word to use. Since we are talking in the context of provability, I meant "a short string of text that replaces a longer string of text for ease of human reader, but is parsed as a longer string of text when you actually work with it". Impredicative definitions are indeed quite common, but they go hand in hand with proofs of their consistency, like proof that a functional equation have a solution, or example of a group to prove that group axioms are consistent, or more generally a model of some axiom system.

Sadly I am not familiar with Haskell, so your link is of limited use to me. But it seems to contain a lot of code and no proofs, so it is probably not what I am looking for anyway.

What I am looking for probably looks like a proof of "". I am in many ways uncertain about whether this is the right formula (is GL the right system to use here (does it even support quantifiers over functional symbols? if not, then there should be an extension that does); is "does f exist" the right question to be asking here; does "" correctly describe what we want from FairBot). But some proof of that kind should exist, otherwise why should we think that such a FairBot exists/is consistent?
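
To illustrate the shape of statement I mean (this is purely my own hypothetical guess at a formalization, not a quote from anywhere), something like:

```latex
% Hypothetical, illustrative formalization only:
% f ranges over candidate bots, g over opponents, C means "cooperate",
% and \Box is the provability operator.
\exists f\ \forall g:\qquad f(g) = C \;\leftrightarrow\; \Box\big(g(f) = C\big)
```

The open questions above are whether something of this shape is even well-formed in GL (or an extension with function quantifiers) and whether it captures what we want from FairBot.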

Comment by AVoropaev on Seek Mistakes in the Space Between Math and Reality · 2022-03-02T00:10:03.592Z · LW · GW

It's been ages since I studied provability logic, but those bots look suspicious to me. Has anybody actually formalized them? Like, the definition of FairBot involves itself, so it is not actually a definition. Is it a statement that we consider true? Is it just a statement that we consider provable? Why won't adding something like this to GL result in a contradiction?
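
For concreteness, the definition I have in mind is roughly the following (quoting from memory of the open-source game theory write-ups, so the notation may be off):

```latex
% As I remember it (may be off in details): FairBot cooperates with X
% exactly when it can prove that X cooperates with FairBot.
\mathrm{FairBot}(X) = C \;\iff\; \vdash_{PA} \big( X(\mathrm{FairBot}) = C \big)
```

FairBot appears on both sides, which is exactly why I'm asking how the self-reference gets discharged.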

Comment by AVoropaev on How to develop safe superintelligence · 2022-03-01T22:59:35.377Z · LW · GW

I'm no programmer, so I have no comment on the "how to develop" part. The "safe" part seems extremely unsafe to me, though.

1) Your strategy relies on the human supervisor's ability to recognize a threat that has been disguised by a superintelligence. Which is doomed to fail almost by definition.

2) The supervisor himself is not protected from possible threats. He is also one of the main targets that the AI would want to affect.

3) >Moreover, the artificial agent won’t be able to change the operational system of the computer, its own code or any offline task that could fundamentally change the system.
I don't see what kind of manual supervision could possibly accomplish that, even if none of the other problems existed.

4) Human experts don't have a "complete understanding" of any subject worth mentioning. Certainly nothing involving biology. So your AI will just produce a text that convinces them that the proposed solution is safe. Being superintelligent, it will be able to do that even if the solution is not in fact safe. Or it might produce some other dangerous texts, like texts that convince them to lie to you that the solution is safe.

Comment by AVoropaev on What’s Up With the CDC Nowcast? · 2021-12-22T17:37:31.288Z · LW · GW

I'm trying to see what makes those numbers so implausible, and as far as I understand (at least without looking into regional data), the most surprising/suspicious thing is that the number of new Delta cases is dropping too fast.

But why shouldn't it be dropping fast? The odds of people getting Omicron (as opposed to Delta) are growing fast enough -- if we assume that they are (# of Omicron cases)/(# of Delta cases)·(some coefficient like their relative R_0), then due to Omicron's fast doubling they can go from 1:2 to 4:1 in just a week. That would make the share of new Delta cases among the population for which Omicron and Delta compete (as in, they are destined to get one or the other) drop from 66% to 20% -- a more than threefold drop.
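
Here is that arithmetic spelled out (a toy calculation with the odds above, not real case counts):

```python
# Toy check: if Omicron:Delta odds among new infections go from 1:2 to 4:1
# over a week, Delta's share of new cases in the "destined to get one or
# the other" pool collapses.
odds_before = (1, 2)   # Omicron : Delta
odds_after = (4, 1)

delta_share_before = odds_before[1] / sum(odds_before)  # 2/3 ≈ 67%
delta_share_after = odds_after[1] / sum(odds_after)     # 1/5 = 20%

print(f"Delta share of new cases: {delta_share_before:.0%} -> {delta_share_after:.0%}")
print(f"drop factor over one week: {delta_share_before / delta_share_after:.1f}x")
```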

In the real world there are no people destined to get Covid. But there are unvaccinated people who go unmasked to a club with hundreds of other people like them -- and continue to do so until they get Covid. This and other similar modes of behavior seem like a close enough approximation of "people destined to get Covid". Is it close enough? Are there enough people like that compared to people for whom Omicron and Delta don't compete that much? I don't know, quite possibly not.

Does this mean that in order to notice that the nowcast's data is suspicious, I must have some knowledge about how different variants compete with each other? Can someone ELIU how this competition happens? Am I missing something else?

Comment by AVoropaev on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-13T00:40:28.267Z · LW · GW

I don't see why all possible ways for an AGI to critically fail to do what we have built it for must involve taking over the lightcone.

That doesn't stop other people from building one.

So let's also blow up the Earth. By that definition the alignment would be solved.

Comment by AVoropaev on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T23:28:09.212Z · LW · GW

When you say "create an AGI which doesn't do this", do you mean one that has about 0% probability of doing it, or one that has less than 100% probability of doing it?

Edit: my impression was that the point of alignment was producing an AGI that has a high probability of good outcomes and a low probability of bad outcomes. Creating an AGI that simply has a low probability of destroying the universe seems trivial. Take a hypothetical AGI before it has produced any output, toss a coin, and if it's tails, destroy it. Voila, the probability of destroying the universe is now at most 50%. How can you even have a device that is guaranteed to destroy the universe if at early stages it can be stopped by a sufficiently paranoid developer or a solar flare?

Comment by AVoropaev on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T22:56:35.869Z · LW · GW

To teach a million kids you need something like a hundred thousand Dath Ilani teachers. They don't currently exist.

This can be circumvented by first teaching, say, a hundred students, 10% of whom become teachers and help teach the next 'generation'. If each 'generation' takes 5 years, and one teacher can teach 10 students per generation, the number of teachers doubles every 5 years, and you get a million Dath Ilanians in something like 50 years.

One teacher teaching 10 students with 1 of them becoming a teacher might be more feasible than it seems. For example, if instead of Dath Ilani ways we are talking about a non-terrible level of math, then I've worked in a system that has 1 math teacher per 6-12 students, where 3%-5% of students become teachers and a generation takes 3-6 years.
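
A quick sketch of that arithmetic (every number is an assumption for illustration; with this particular seed cohort it lands closer to 65-70 years than 50, but the order of magnitude is the same):

```python
# Toy model of the scheme above; all parameters are assumptions.
teachers = 10               # enough to teach the first ~hundred students
students_per_teacher = 10   # per generation
teacher_fraction = 0.10     # 10% of students become teachers
years_per_generation = 5

total_taught, years = 0, 0
while total_taught < 1_000_000:
    new_students = teachers * students_per_teacher
    total_taught += new_students
    teachers += int(new_students * teacher_fraction)  # roughly doubles each generation
    years += years_per_generation

print(years, total_taught)  # ~70 years, ~1.6M taught under these assumptions
```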

The problem is, currently we have 0 Dath Ilani teachers.

Comment by AVoropaev on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T22:32:00.681Z · LW · GW

How would having an AGI that has a 50% chance to obliterate the lightcone, a 40% chance to obliterate just Earth, and a 10% chance to correctly produce 1,000,000 paperclips without casualties solve alignment?

Comment by AVoropaev on Where's my magic sword? · 2021-09-27T19:55:08.303Z · LW · GW

I think that since the lady also said something about a pharmacy, it's more likely that "lusi"="Luke's".

Comment by AVoropaev on Schools probably do do something · 2021-09-26T21:59:10.406Z · LW · GW

It's not about relative age (either as in the age of one person divided by the age of another, or one age subtracted from the other); it's about their month of birth. So it's evidence for the relevance of the amount of sunshine received during pregnancy, the relevance of the age at which kids are admitted to school, and the relevance of astrology.

Since it seems to somewhat align with different kinds of education starting at different times of the year, my personal bet is on schools, though I wouldn't completely discount differences between pregnancies at different times of the year (sorry, astrology, but I need a lot more evidence to seriously consider you).

Comment by AVoropaev on The case for hypocrisy · 2021-05-13T23:13:51.805Z · LW · GW

But that's a fix to a global problem that you won't fix anyway. What you can do is allocate some resources to fixing the lesser problem of "this guy had nothing to eat today".

It seems to me that your argument proves too much -- when faced with a problem that you can fix you can always say "it is a part of a bigger problem that I can't fix" and do nothing.

Comment by AVoropaev on The case for hypocrisy · 2021-05-13T20:41:56.453Z · LW · GW

What do you mean by 'real fix' here? What if I said that the real-real fix requires changing human nature and materializing food and other goods out of nowhere? That might be a more effective fix, but it is unlikely to happen in the near future and it is unclear how you could make it happen. Donating money now might be less effective, but it is something that you can actually do.

Comment by AVoropaev on In Defence of Spock · 2021-04-23T09:34:54.771Z · LW · GW

Detailed categorizations of mental phenomena sound useful. Is there a way for me to learn that without reading religious texts?

Comment by AVoropaev on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-30T08:53:19.834Z · LW · GW

How can you check a proof of any interesting statement about the real world using only math? The best you can do is check for mathematical mistakes.

Comment by AVoropaev on Extracting Money from Causal Decision Theorists · 2021-01-29T21:01:02.660Z · LW · GW

> I assume you mean that I assume P(money in Bi | buyer chooses Bi )=0.25? Yes, I assume this, although really I assume that the seller's prediction is accurate with probability 0.75 and that she fills the boxes according to the specified procedure. From this, it then follows that P(money in Bi | buyer chooses Bi )=0.25.

Yes, you are right. Sorry.

> Why would it be a logical contradiction? Do you think Newcomb's problem also requires a logical contradiction?

Okay, it probably isn't a contradiction, because the situation "Buyer writes his decision and it is common knowledge that an hour later Seller sneaks a peek into this decision (with probability 0.75) or into a random false decision (0.25). After that Seller places money according to the decision he saw." seems similar enough and can probably be formalized into a model of this situation.

You might wonder why I am spouting a bunch of wrong things in an unsuccessful attempt to attack your paper. I do that because it looks really suspicious to me, for the following reasons:

  1. You don't use language developed by logicians to avoid mistakes and paradoxes in similar situations.
  2. Even for something written in more or less basic English, your paper doesn't seem to be rigorous enough for the kinds of problems it tries to tackle. For example, you don't specify what exactly is considered common knowledge, and that can probably be really important.
  3. Your result looks similar to something you would try to prove as a stepping stone to proving that this whole situation with boxes is impossible: "It follows that in this situation two perfectly rational agents with the same information would make different deterministic decisions. Thus we arrive at a contradiction and this situation is impossible." In your paper the agents are rational in different ways (I think), but it still looks similar enough for me to become suspicious.

So, while my previous attempts at finding an error in your paper failed pathetically, I'm still suspicious, so I'll give it another shot.

When you argue that Buyer should buy one of the boxes, you assume that Buyer knows the probabilities that Seller assigned to Buyer's actions. Are those probabilities also part of the common knowledge? How is that possible? If you try to do the same in Newcomb's problem, you get something like "an omniscient predictor predicts that the player will pick box A (with probability 1); the player knows about that; the player is free to pick between A and both boxes", which seems to be a paradox.

Comment by AVoropaev on Extracting Money from Causal Decision Theorists · 2021-01-29T09:13:05.148Z · LW · GW

I've skimmed over the beginning of your paper, and I think there might be several problems with it.
 

  1. I don't see where it is explicitly stated, but I think the information "seller's prediction is accurate with probability 0.75" is supposed to be common knowledge. Is it even possible for a non-trivial probabilistic prediction to be common knowledge? Like, not as in some real-life situation, but as in this condition not being a logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains a contradiction.
  2. A minor nitpick compared to the previous one, but you don't specify what you mean by "prediction is accurate with probability 0.75". What kinds of mistakes does the seller make? For example, if the buyer is going to buy the first box, then with probability 0.75 the prediction will be "the first box". What about the remaining 0.25? Will it be 0.125 for "none" and 0.125 for "the second box"? Will it be 0.25 for "none" and 0 for "the second box"? (And does the buyer know about that? What about the seller knowing about the buyer knowing...)

    When you write "$1 − P(money in Bi | buyer chooses Bi) · $3 = $1 − 0.25 · $3 = $0.25", you assume that P(money in Bi | buyer chooses Bi) = 0.75. That is, if the buyer chooses the first box, the seller can't possibly think that the buyer will choose none of the boxes. And the same for the case of the buyer choosing the second box. You can easily fix it by writing "$1 − P(money in Bi | buyer chooses Bi) · $3 >= $1 − 0.25 · $3 = $0.25" instead. It is possible that you make some other implicit assumptions about the mistakes the seller can make, so you might want to check that.
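
To make sure I'm reading the setup correctly, here is a toy simulation of my understanding of it (the filling rule and the error model are my assumptions, and the error model is exactly the thing I'm asking about):

```python
import random

# Toy simulation of my reading of the adversarial offer. Assumptions (mine,
# not necessarily the paper's): the buyer always takes the first box; when
# the seller's prediction errs (prob. 0.25) she predicts one of the other
# two options uniformly; $3 is placed in every box she predicts will NOT be taken.
random.seed(0)
PRICE, PRIZE, ACCURACY = 1.0, 3.0, 0.75
trials, buyer_total = 100_000, 0.0

for _ in range(trials):
    buyer_choice = "first box"
    if random.random() < ACCURACY:
        prediction = buyer_choice
    else:
        prediction = random.choice(["second box", "no box"])
    money_in_chosen_box = prediction != buyer_choice
    buyer_total += (PRIZE if money_in_chosen_box else 0.0) - PRICE

print(f"buyer's average payoff per purchase: {buyer_total / trials:+.3f}")
# comes out near -0.25, i.e. the seller nets about $0.25 per box sold
```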

     
Comment by AVoropaev on What's the big deal about Bayes' Theorem? · 2021-01-28T07:58:23.006Z · LW · GW

I've skimmed over A Technical Explanation of Technical Explanation (you can make links and do other stuff by selecting the text you want to edit, as if you were going to copy it; if your browser is compatible, a toolbar should appear). I think that's the first time in my life that I've found I need to know more math to understand a non-mathematical text. The text is not about Bayes' Theorem, but it is about the application of probability theory to reasoning, which is relevant to my question. As far as I understand, Yudkowsky writes about the same algorithm that Vladimir_Nesov describes in his answer to my question. Some nice properties of the algorithm are proved, but not very rigorously. I don't know how to fix that, which is not very surprising, since I know very little about statistics. In fact, I am now half-convinced to take a course or something like that. Thank you for that.

As for the other part of your answer, it actually makes me even more confused. You are saying "using Bayes in life is more about understanding just how much priors matter than about actually crunching the numbers". To me it sounds similar to "using steel in life is more about understanding just how much the whole can be greater than the sum of its parts than about actually making things from some metal". I mean, there is nothing inherently wrong with using a concept as a metaphor and/or inspiration. But it can sometimes cause miscommunication. And I am under the impression that some people here (not only me) talk about Bayes' Theorem in a very literal sense.

Comment by AVoropaev on What's the big deal about Bayes' Theorem? · 2021-01-28T02:44:19.640Z · LW · GW

That's interesting. I've heard about probabilistic modal logics, but didn't know that the traffic goes both ways -- not only logicians moving toward statistics, but statisticians moving toward logic as well. Is there some book or video course accessible to a mathematics undergraduate?

Comment by AVoropaev on What's the big deal about Bayes' Theorem? · 2021-01-28T02:06:15.872Z · LW · GW

This formula is not Bayes' Theorem, but it is a similar simple formula from probability theory, so I'm still interested in how you can use it in daily life.
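
(For reference, the formula I have in mind -- my guess at what the parent comment wrote, which may differ from the original -- is the odds form of the update rule:)

```latex
% My reading of the formula under discussion (odds form of Bayesian updating):
\frac{P(x \mid D)}{P(y \mid D)} \;=\; \frac{P(x)}{P(y)} \cdot \frac{P(D \mid x)}{P(D \mid y)}
```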

Writing P(x|D) implies that x and D are the same kind of object (data about some physical process?), and there are probably a lot of subtle problems in defining a hypothesis as a "set of things that happen if it is true" (especially if you want to have hypotheses that involve probabilities).

Using this formula allows you to update the probabilities you ascribe to hypotheses, but it is not obvious that the update will make them better. I mean, you obviously don't know the real P(x)/P(y), so you'll input an incorrect value and get an incorrect answer. But it will sometimes be less incorrect. If this algorithm has some nice properties like "the sequence of P(x)/P(y) you get by repeating your experiment converges to the real P(x)/P(y), provided x and y are falsifiable by your experiment" (or something like that), then by using this algorithm you will, with high probability, eventually end up with better estimates. It would be nice to understand for what kinds of x, y and D you should be at least 90% sure that your P(x)/P(y) will be more correct after a million experiments.
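
Here is the kind of experiment I have in mind, as a toy simulation (the hypotheses and numbers are made up purely for illustration):

```python
import random

# Toy illustration of the convergence property discussed above: update the
# odds of hypothesis x over hypothesis y after each observation and watch
# them drift toward the truth.
random.seed(0)
p_heads_x, p_heads_y = 0.7, 0.5   # x: biased coin, y: fair coin
odds_x_over_y = 1.0               # start indifferent (prior odds 1:1)

for n in range(1, 1001):
    heads = random.random() < p_heads_x   # the data really comes from x
    likelihood_ratio = (p_heads_x / p_heads_y) if heads else ((1 - p_heads_x) / (1 - p_heads_y))
    odds_x_over_y *= likelihood_ratio
    if n in (10, 100, 1000):
        print(f"after {n:4d} flips, odds in favour of x: {odds_x_over_y:.3g}")
```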

I'm not implying that this algorithm doesn't work. More like it seems that proving that it works is beyond me. Mostly because statistics is one of the more glaring holes in my mathematical education. I hope that somebody has proved that it works at least in the cases you are likely to encounter in your daily life. Maybe it is even a well-known result.

Speaking of daily life, can you tell me how people (and you specifically) actually apply this algorithm? How do you decide in which situations it is worth using? How do you choose the initial values of P(x) (e.g. it is hard for me to translate "x is probably true" into "I am 73% sure that x is true")? Are there some other important questions I should be asking about it?