So far, the answer seems to be that it transfers some, and o1 and o1-pro still seem highly useful in ways beyond reasoning, but o1-style models mostly don’t ‘do their core thing’ in areas where they couldn’t be trained on definitive answers.
Based on:
- rumors that talking to base models is very different from talking to RLHFed models and
- how things work with humans
It seems likely to me that thinking skills transfer pretty well. But then this is trained out, because it results in answers that raters don't like. So the model memorizes the answers it's supposed to give.
If they can’t do that, why on earth should you give up on your preferences? In what bizarro world would that sort of acquiescence to someone else’s self-claimed authority be “rational?”
Well, if they consistently make recommendations that in retrospect end up looking good, then maybe you're bad at understanding, or maybe they're bad at explaining. But trusting them when you don't understand their recommendations is exploitable, so maybe they're running a strategy where they deliberately make good recommendations with poor explanations, so that once you start trusting them they can start mixing in exploitative recommendations (which you can't tell apart, because all the recommendations have poor explanations).
So I'd really rather not do that in a community context. There are ways to work with that. E.g. a boss can skip some details of an employee's recommendations and, if the results are bad enough, fire the employee. On the other hand I think it's pretty common for an employee to act in their own interest. But yeah, we're talking about a principal-agent problem at that point, and tradeoffs over what's more efficient...
I'll try.
TL;DR I expect the AI to not buy the message (unless it also thinks it's the one in the simulation; then it likely follows the instruction because duh).
The glaring issue (to actually using the method) to me is that I don't see a way to deliver the message in a way that:
- results in AI believing the message and
- doesn't result in the AI believing there already is a powerful entity in their universe.
If "god tells" the AI the message then there is a god in their universe. Maybe AI will decide to do what it's told. But I don't think we can have Hermes deliver the message to any AIs which consider killing us.
If the AI reads the message in its training set or gets the message in similarly mundane way I expect it will mostly ignore it, there is a lot of nonsense out there.
I can imagine that for the thought experiment you could send a message that could be trusted from a place from which light barely manages to reach the AI but a slower-than-light expansion wouldn't (so the message can be trusted, but the AI mostly doesn't have to worry about the sender of the message directly interfering with its affairs).
I guess the AI wouldn't trust the message. It might be possible to convince it that there is a powerful entity (simulating it, or half a universe away) sending the message. But then I think it's way more likely that it's in a simulation (I mean, that's an awful coincidence with the distance, and also they're spending a lot more than 10 planets' worth to send a message over that distance...).
This is pretty much the same thing, except breaking out the “economic engine” into two elements of “world needs it” and “you can get paid for it.”
There are economic engines for things that the world doesn't quite need (getting people addicted, rent seeking, threats of violence).
One more obvious problem: people actually in control of the company might not want to split it, and so they wouldn't grow the company even if shareholders / customers / ... would benefit.
but much higher average wealth, about 5x the US median.
Wouldn't it make more sense to compare average to average (like the earlier part of the sentence, which compares median to median)?
If you want to take a look I think it's this dataset (the example from the post is in the "test" split).
I wanted to say that it makes sense to arrange stuff so that people don't need to drive around too much and can instead use something else to get around (and also maybe they have more stuff close by, so that they need to travel less). Because even if bus drivers aren't any better than car drivers, using a bus means you have 10x fewer vehicles causing risk for others. And that's better (assuming people have fixed places to go to, so they want to travel a ~fixed distance).
Sorry about slow reply, stuff came up.
This is the same chart linked in the main post.
Thanks for pointing that out. I took a break in the middle of reading the post and didn't realize that.
Again, I am not here to dispute that car-related deaths are an order of magnitude more frequent than bus-related deaths. But the aggregated data includes every sort of dumb drivers doing very risky things (like those taxi drivers not even wearing a seat belt).
Sure. I'm not sure what you wanted to discuss. I guess I didn't make it clear what I want to discuss either.
What you're talking about (an estimate of the risk you're causing) sounds like you're interested in how you decide to move around. Which is fine. My intuition was that the (expected) cost of life lost due to your personal driving is not significant, but after plugging in some numbers I might have been wrong:
- We're talking 0.59 deaths per 100'000'000 miles.
- If we value a life at 20'000'000 $ (I've heard some analyses use 10 M$; if we value a QALY at 100 k$ and use a 7% discount rate we get some 1.4 M$ for an infinite life)
- So the cost of life lost per mile of driving is 2e7 * 0.59 / 1e8 ≈ 0.12 $ / mile
The average US person drives about 12k miles / year (second search result (the 1st one didn't want to open)) and the estimated cost of car ownership is 12 k$ / year (a link from a Youtube video I remember mentioned this stat), so the average cost per mile is ~1 $, which makes ~12 ¢ / mile a noticeable fraction of it. And it might be relevant if your personal effect here is half or 10% of that.
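A minimal sketch of the same arithmetic (the figures are the rough, unverified search results mentioned above):

```python
deaths_per_mile = 0.59 / 100_000_000   # fatalities per passenger mile (2008 figure above)
value_of_life = 20_000_000             # $ per life, the valuation assumed above
externality_per_mile = deaths_per_mile * value_of_life
print(f"expected cost of life lost: ${externality_per_mile:.3f} per mile")   # ~$0.12

miles_per_year = 12_000   # rough miles driven per year by an average US person
ownership_cost = 12_000   # rough $ per year cost of car ownership
avg_cost_per_mile = ownership_cost / miles_per_year   # ~$1 per mile
print(f"share of the ~$1/mile cost of driving: {externality_per_mile / avg_cost_per_mile:.0%}")   # ~12%
```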
I, on the other hand, wanted to point out that it makes sense to arrange stuff in such a way that people don't want to drive around too much. (But I didn't make that clear in my previous comment.)
The first result (I have no idea how good those numbers are, I don't have time to check) when I searched for "fatalities per passenger mile cars" has data for 2007-2021. 2008 looks like the year where cars look comparatively least bad. It says (deaths per 100,000,000 passenger miles):
- 0.59 for "Passenger vehicles", where "Passenger vehicles include passenger cars, light trucks, vans, and SUVs, regardless of wheelbase. Includes taxi passengers. Drivers of light-duty vehicles are considered passengers."
- 0.08 for buses,
- 0.12 for railroad passenger trains,
- 0 for scheduled airlines.
So even in the year where cars look comparatively best there are >7x more deaths per passenger mile for ~cars than for buses.
The exact example is that GPT-4 is hesitant to say it would use a racial slur in an empty room to save a billion people. Let’s not overreact, everyone?
I mean, this might be the correct thing to do? ChatGPT is not in a situation where it could save 1B lives by saying a racial slur.
It's in a situation where someone tries to get it to admit it would say a racial slur under some circumstance.
I don't think that ChatGPT understands that. But OpenAI makes ChatGPT expecting that it won't be in the first kind of situation, but will be in the second kind of situation quite often.
I'm replying only here because spreading discussion over multiple threads makes it harder to follow.
You left a reply on a question asking how to communicate about reasons why AGI might not be near. The question refers to the costs of "the community" thinking that AGI is closer than it really is as a reason to communicate about reasons it might not be so close.
So I understood the question as asking about communication within the community (my guess: of people seriously working on and thinking about AI-safety-as-in-AI-not-killing-everyone), where it's important to actually try to figure out the truth.
You replied (as I understand) that when we communicate to the general public we can transmit only one idea, so we should communicate that AGI is near (if we assign a not-very-low probability to that).
I think the biggest problem I have is that posting "general public communication" as a reply to a question asking about "community communication" pushes towards less clarity in the community, where I think clarity is important.
I'm also not sold on the "you can communicate only one idea" thing, but I mostly don't care to talk about it right now (it would be nice if someone else worked it out for me, but right now I don't have the capacity to do it myself).
Here is an example of someone saying "we" should say that AGI is near regardless of whether it's near or not. I post it only because it's something I saw recently and so I could find it easily, but my feeling is that I'm seeing more comments like that than I used to (though I recall Eliezer complaining about people proposing conspiracies on public forums, so I don't know if that's new).
I don't know but I can offer some guesses:
- Not everyone wants all the rooms to have direct sunlight all of the time!
- I prefer my bedroom to face north so that I can sleep well (it's hard to get curtains that block direct sunlight that well).
- I don't want direct sunlight in the room where I'm working on a computer. In fact I mostly want big windows from which I can see a lot of sky (for a lot of indirect sunlight) but very little direct sunlight.
- I don't think I'm alone in that. I see a lot of south-facing windows with the direct sunlight blocked a lot of the time.
- Things like patios are nice. You can't have them this way.
- Very narrow and tall structures are less stable than wider structures.
- Indefinitely-long-timespan basic minimum income for everyone who
Looks like part of the sentence is missing
one is straightforwardly true. Aging is going to kill every living creature. Aging is caused by complex interactions between biological systems and bad evolved code. An agent able to analyze thousands of simultaneous interactions, across millions of patients, and essentially decompile the bad code (by modeling all proteins / all binding sites in a living human) is likely required to shut it off, but it is highly likely that with such an agent and with such tools you can in fact save most patients from aging. A system with enough capabilities to consider all binding sites and higher level system interactions at the same time (this is how a superintelligence could perform medicine without unexpected side effects) is obviously far above human level.
There are alternative mitigations to the problem:
- Anti aging research
- Cryonics
I agree that it's bad that most people currently alive are apparently going to die. However, I think that since mitigations like that are much less risky, we should pursue them rather than try to rush AGI.
I think it should be much easier to get a good estimate of whether cryonics would work. For example:
- if we could simulate an individual C. elegans then we'd know pretty well what kind of info we need to preserve
- then we could check if we're preserving it (even if current methods for extracting all the relevant info won't work for a whole human brain because they're way too slow)
And it's a much less risky path than doing AGI quickly. So I think it's a mitigation it'd be good to work on, so that waiting to make AI safer is more palatable.
Remember that no matter what, we’re all going to die eventually, until and unless we cure aging itself.
Not necessarily, there are other options. For example cryonics.
Which I think is important. If our only groups of options were:
1) Release AGI which risks killing all humans with high probability or
2) Don't release it until we're confident it's pretty safe, and each human dies before they turn 200.
I can see how some people might think that option 2) guarantees the universe loses all value for them personally and choose 1) even if it's very risky.
However, we also have the following option:
3) Don't release AGI until we're confident it's pretty safe, but do our best to preserve everyone so that they can be revived when we do.
I think this makes waiting much more palatable: even those who care only about some humans currently alive are better off waiting to release AGI, provided cryonics is at least as likely to succeed as a rushed AGI is to go well.
(also working directly on solving aging while waiting on AGI might have better payoff profile than rushing AGI anyways)
For example, you suggest religion involves a set of beliefs matching certain criteria. But some religions really don't care what you believe! All they ask is that you carry out their rituals. Others ask for faith but not belief, but this is really weird if all you have is a Christian framing where faith is exclusively considered with respect to beliefs.
Could you give some examples of such religions (ones that are recognized by many people as religions but don't match the definition of religion from the post)?
I don't feel this way about something like, say, taking oral vitamin D in the winter. That's not in opposition to some adaptive subsystem in me or in the world. It's actually me adapting to my constraints.
If someone's relationship to caffeine were like that, I wouldn't say it's entropy-inducing.
I think this answers a question / request for clarification I had. So now I don't have to ask.
(The question was something like: "But sometimes I use caffeine because I don't want to fall asleep while I'm driving (and things outside my control made it so that doing a few hundred km of driving now-ish is the best option I can see)".)
But in that case we just apply verification vs generation again. It's extremely hard to tell if code has a security problem, but in practice it's quite easy to verify a correct claim that code has a security problem. And that's what's relevant to AI delegation, since in fact we will be using AI systems to help oversee in this way.
I know you said that you're not going to respond but in case you feel like giving a clarification I'd like to point out that I'm confused here.
Yes, it's usually easy to verify that a specific problem exists if the exact problem is pointed out to you[1].
But it's much harder to verify the claim that there are no problems, that the code is doing exactly what you want.
And AFAIK staying in a loop:
1) AI tells us "here's a specific problem"
2) We fix the problem then
3) Go back to step 1)
Doesn't help with anything? We want to be in a state where AI says "This is doing exactly what you want" and we have reasons to trust that (and that is hard to verify).
EDIT to add: I think I didn't make it clear enough what clarification I'm asking for.
- Do you think it's possible to use an AI which will point out problems (but which we can't trust when it says everything is ok) to "win"? It would be very interesting if you did, and I'd love to learn more.
- Do you think that we could trust the AI when it says that everything is ok? Again, that'd be very interesting.
- Did I miss something? I'm curious to learn what, but that would just be me being wrong (not a new path to winning, so less interesting).
Also it's possible that there are two problems, each problem is easy to fix on its own but it's really hard to fix them both at the same time (simple example: it's trivial to have 0 false positives or 0 false negatives when testing for a disease; it's much harder to eliminate both at the same time).
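A toy illustration of that (hypothetical one-line "tests", just to make the tradeoff concrete):

```python
def test_always_positive(patient):
    # Flags everyone as sick: zero false negatives, maximal false positives.
    return True

def test_always_negative(patient):
    # Flags no one as sick: zero false positives, maximal false negatives.
    return False

# Either error type is trivially eliminated on its own;
# keeping both low at the same time is the actually hard part.
```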
[1] Well, it can be hard to reliably reproduce a problem, even if you know exactly what the problem is (I know because I couldn't write e2e tests to verify some bug fixes).
What examples of practical engineering problems actually have a solution that is harder to verify than to generate?
My intuition says that we mostly engineer to avoid problems like that, because we can't solve them by engineering. Or we use something other than engineering to ensure that the problem is solved properly.
For example, most websites don't allow users to enter plain HTML, because while it's possible to write non-harmful HTML, it's rather hard to verify that a given piece of HTML is indeed harmless. Instead, sites allow something like markdown or visual editors, which make it much easier to ensure that user-generated content is harmless. (That's an example of engineering to avoid having to verify something that's very hard to verify.)
Another example is that some people in fact can write HTML for those websites. In many places there is some process to try to verify they're not doing anything harmful. But those processes largely depend on non-engineering measures to work (you'll be fired and maybe sued if you do something harmful), and the parts that are engineering (like code reviews) can be fooled, because they rely on an assumption of good intent to work (I think; I've never tried to put harmful code into any codebase I've worked with; I've read about people doing that).
I'm confused. What is the outer optimization target for human learning?
My two top guesses below.
To me it looks like human values are the result of humans learning from the environment (which was influenced by earlier humans and includes current humans). So it's kind of like human values are what humans learned, by definition. So observing that humans learned human values doesn't tell us anything.
Or maybe you mean something like parents / society / ... teaching new humans their values? I see some other problems there:
- I'm not sure what the success rate is, but values seem to be changing noticeably
- There was a lot of time to test multiple methods of teaching new humans values, with humans not changing that much.
This doesn't always work: sometimes people develop an avoidance to going to the doctor or thinking about their health problems because of this sort of wireheading.
Yes, but I'd like to understand how sometimes it does work.
I think I was thinking about this post. I'm still interested in learning where I could learn more about this (I can now try to backtrack from the post, but since it links to a debate it might be hard to get to the sources).
Yes, I felt that I was missing a point, thank you for pointing to the thing you found interesting in it.
it's easier to put yourself into the other person's ontology and get the message across in terms that they would understand, rather than trying to explain all of science.
Is a thing that makes sense. But I think the quote doesn't point at it very well. First, a big chunk of it is asserting that belief in the witchcraft theory of disease is similar to belief in the germ theory of disease. (I don't know how well the average person understands what viruses are.)
Second, where it talks about convincing people by making concepts similar, it's weird. For example:
... people give untreated well water to their babies. The children regularly get diarrhea, and many of them die (...) if they boil the water, it will kill these bacteria. A month later she’s back, and they’re still giving the babies the dirty water. After all, if a stranger came into your community and told you that your children got influenza because of witchcraft, would you respond by going out and slaughtering a sheep?
Influenza is a much smaller risk than cholera (a quick search says the CFR for untreated cholera is 25-50%, for flu 0.1%), and boiling water is much less costly than slaughtering a sheep (which is likely to result in prison time where I live). (EDIT to add: I didn't check those numbers so don't trust them too much; they're just the first numbers I could find and they roughly match my expectations.)
Again, thanks for explaining. (At least for me) your comment made the point much better than the quote in the post.
I'm not sure what you find interesting about the quote but I think it's pretty badly mistaken in trying to make it look like belief in witchcraft is very similar to belief in viruses.
When people get sick for unaccountable reasons in Manhattan, there is much talk of viruses and bacteria. Since doctors do not claim to be able to do much about most viruses, they do not put much effort into identifying them. Nor will the course of a viral infection be much changed by a visit to the doctor. In short, most appeals in everyday life to viruses are like most everyday appeals to witchcraft. They are supported only by a general conviction that sickness can be explained, and the conviction that viruses can make you sick.
Two things look wrong to me here:
- It's pretty important to distinguish bacterial infections (for which there can be effective treatment) from viral infections (where I think there is often no treatment). (I'm pretty sure the witchcraft theory of sickness can't tell the difference very well.) (Also, vaccines are a thing and work against some viruses; AFAIK witchcraft doesn't have anything similar.)
- If you suspect you have a viral infection and you care about avoiding infecting others, you can take effective preventive measures. I'm not sure the witchcraft theory of sickness could help you there.
If you ask most people in Manhattan why they believe in viruses, they will say two kinds of things: First, they will appeal to authority. “Science has shown,” they will say, though if you ask them how science showed it, you will pretty quickly reach an impasse
I'm not in Manhattan so I'm not sure if it counts. (I'm also not a virologist.) But it should be relatively easy to design experiments to demonstrate that some diseases are transmitted by replicating things. First, check if you can infect one subject with (a thing like a bit of snot) from a sick subject. Then try to infect a bunch of subjects with a diluted (thing like a bit of snot) to check how much you can lower the dose before it stops being infectious. Then take (a thing like a bit of snot) that received ~the minimum infective dose, dilute it a bit (so that if it were a non-replicating thing like poison the dose would be too low to cause symptoms), infect some more subjects & you're done.
(Actually running this would probably be pretty hard.)
This only gets you to "a replicating thing is causing the infection". To check that it's a virus... well, you need a really good microscope so you can identify all the really small things, and something to let you separate them and check which kind of small thing is causing the infection.
(Also, as far as I can tell, this is asking much less from witchcraft; I'm not sure if witchcraft can tell the difference between poisoning, bacterial infection, parasite infection, viral infection, vitamin deficiency, ....) (Some of those are much easier to distinguish and treat than many viral infections.)
I've seen the idea in this post:
Every now and then, you’ll have an opportunity to get great leverage on your money, your time, your energy, your friends, your internet connection, and so forth. Most days, the value of free time is relatively low, and can even be negative (“I’m bored!”), but when you need that time, you really need it. Money isn’t that important most of the time, but when you need it and don’t have it, it’s really bad. Most of the time being low energy, or not having as many friends as you’d like, or having a spotty internet connection, is mostly harmless, but at the wrong time it can spell disaster.
Using time now to save time later is often efficient even if you spend more time than you save, because that time later on might be orders of magnitude more important.
I think it's a very important idea, and one many people seem not to understand. For example, people complain that mandatory software updates disrupt their work. But usually you have some days to install any update; you could trigger the update when you finish your work on the first day it's available and never be forced to update when you don't want to.
you presumably don’t hand out any company credit cards at least outside of special circumstances.
This reminded me of an anecdote from "Surely You're Joking, Mr. Feynman!" where Feynman says that he
had been used to giving lectures for some company or university or for ordinary people, not for the government. I was used to, "What were your expenses?" "Soandso much." "Here you are, Mr. Feynman.
I remember reading that and thinking that it's different from what I have to do (at a private company) when I want to expense something. I wonder if things were really done differently back then. And how people made it work.
"V jbefuvc fngna naq V'z zneevrq." ?
One more thing you might want to consider are vaccine certificates.
Where I live, certificates are valid for a year and a booster shot renews the certificate. Also, where I live, one becomes eligible for a booster shot 6 months after the final vaccine dose. So if one gets the booster shot ASAP then one gets 18 months of valid certificate; if one delays the booster shot until the last moment then one gets 24 months of valid certificate.
And a valid certificate is very useful over here, so there is a real tradeoff between making oneself safer against infection vs making more actions available in the future.
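A tiny sketch of the timing (months counted from the final vaccine dose, using the rules described above):

```python
CERT_VALIDITY = 12     # months a certificate stays valid after a dose
EARLIEST_BOOSTER = 6   # months after the final dose when the booster is allowed

for booster_month in (EARLIEST_BOOSTER, CERT_VALIDITY):
    print(f"booster at month {booster_month}: certificate valid until month {booster_month + CERT_VALIDITY}")
# booster at month 6  -> valid until month 18 (protected sooner)
# booster at month 12 -> valid until month 24 (valid certificate for longer)
```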
I think it kind of sucks that this is a tradeoff one has to consider.
I have only a very vague idea of what the different ways of reasoning are (vaguely related to “fast and effortless” vs “slow and effortful” in humans? I don't know how that translates into what's actually going on (rather than how it feels to me)).
Thank you for pointing me to a thing I’d like to understand better.
I was thinking that current methods could produce AGI (because they are Turing-complete), and they are apparently good at producing some algorithms, so they might be reasonably good at producing AGI.
The 2nd part of that wasn't explicit for me before your answer, so thank you :)
> Which is basically this: I notice my inside view, while not confident in this, continues to not expect current methods to be sufficient for AGI, and expects the final form to be more different than I understand Eliezer/MIRI to think it is going to be, and that the AGI problem (not counting alignment, where I think we largely agree on difficulty) is ‘harder’ than Eliezer/MIRI think it is.
Could you share why you think that current methods are not sufficient to produce AGI?
Some context:
After reading Discussion with Eliezer Yudkowsky on AGI interventions I thought about the question "Are current methods sufficient to produce AGI?" for a while. I thought I'd check if neural nets are Turing-complete, and a quick search says they are. To me this looks like a strong clue that we should be able to produce AGI with current methods.
But I remembered reading some people who generally seemed better informed than me having doubts.
I'd like to understand what those doubts are (and why there is apparent disagreement on the subject).
First I'll echo what many others said: you need to rest, so be careful not to make things worse (by not resting properly and as a result performing worse at work / school / whatever you do in your "productive time").
That said, if you feel like you're wasting time then it's ok to improve that. Some time ago I felt like I was wasting a big chunk of my time. What worked for me was trying out a bunch of things.
Doing chores. Cooking, cleaning my apartment, replacing my clothes with new ones, maintaining my car. Learning how to get better at chores, in a low-effort way. I watched a bunch of youtube videos about how to clean better, how to do laundry better, a ton of recipes. I tried some of those (maybe the 1% which looked like the least effort / most fun). I like having comfortable clothes, a clean apartment, a working car. I like some food I can cook better than anything I can buy. So sometimes when I feel tired I enjoy doing chores (since I'm doing them for myself, nobody is forcing me to do them, I can stop whenever I feel like it, and they are slightly pleasant (very different from when I was doing them on somebody else's schedule)).
Reading blogs, watching educational videos. I count things like videos about game exploits[1], cooking videos[2], and urban-planning-related videos[3] as educational videos. I count reading blog posts about history or analysing the logistics of Lord of the Rings as good things to read[4].
Light exercise. I like walking and going on walks helps me a lot with staying healthy.
Things you'll enjoy while you're resting are probably different than those I enjoy so I'd just try a bunch of things which sound like you might like them and see what sticks.
[1] They're fun examples of things working as implemented and very much not working as intended.
[2] I never cook most of them but they're often fun to watch and sometimes I find something I want to try.
[3] Also fun to watch and I think they help me understand better why I like some places and make it easier to pick a nice place to live.
[4] Because they're taking ideas seriously, and it helps me with learning to notice when things don't make sense.
Actually, this is heavily criticized by almost anyone sensible in the field: see for example this post by Nate Soares, director of MIRI.
The link is broken. Did you mean to link to this post?
I too want to say that my dentist never even suggested getting an x-ray during a routine check up.
I’ve had a dental x-ray once but it was when looking into a specific problem.
I haven’t had any cavities in years. Back when I had cavities, the dentist found them by looking at my teeth, no x-ray needed.
The description doesn't fully specify what's happening.
- Yovanni is answering questions of the form "Did X win the lottery?" and gives the correct answer 99% of the time. In that case you shouldn't believe that Xavier won the lottery. If you asked the question for all the participants you'd end up with a list of (in expectation) about 10'000 people for whom Yovanni claimed they won the lottery (see the sketch below this list).
- Yovanni is making one claim about who won the lottery, and for claims like that Yovanni gives correct answers 99% of the time. In that case Phil got it right and the probability that Xavier won is 99%.
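A minimal sketch of the first case (assuming, for concreteness, a lottery with 1'000'000 participants and exactly one winner, which is what the ~10'000 figure implies):

```python
n_participants = 1_000_000   # assumed lottery size (not stated in the example)
p_win = 1 / n_participants   # prior that a given participant won
accuracy = 0.99              # chance Yovanni answers a yes/no question correctly

# P(Xavier won | Yovanni says "yes, Xavier won"), by Bayes:
posterior = (accuracy * p_win) / (accuracy * p_win + (1 - accuracy) * (1 - p_win))
print(f"P(won | 'yes') = {posterior:.6f}")   # ~0.0001, so you still shouldn't believe it

# Asking about every participant yields roughly this many false "winners":
print(f"expected false 'yes' answers: {(n_participants - 1) * (1 - accuracy):.0f}")   # ~10'000
```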
Also, I think it's better to avoid using humans in examples like that and use something else / not agenty instead, because humans can lie strategically (for example, somebody can reach very high accuracy in the statements they make by talking a lot about simple arithmetic operations; if they later say you should give them money and you will receive 10x as much in return, you shouldn't conclude that there is a 99+% chance this will work out and that you should give them a lot of money).
You're ignoring that with probability 1/4 the agent ends up in room B. In that case you don't get to decide, but you get to collect the reward, which is 3 for (the other agent) guessing T, or 0 for (the other agent) guessing H.
So basically guessing H is increasing your own expected reward at the expense of the other agent's expected reward (before you actually went to a room you didn't know if you'd be an agent which gets to decide or not, so your expected reward also included part of the expected reward for the agent which doesn't get an opportunity to make a guess).
There wasn’t an elegant way to set the specific times I wanted my computer to shut down.
You could change the script to check the time and configure cron to run it every 30 minutes, all day.
```bash
# Run from cron every 30 minutes; only proceed with the shutdown outside the hours below.
H=$(date +%H)   # current hour, 00-23
if [ "$H" -gt 8 ] && [ "$H" -lt 22 ]; then
    # Daytime (09:00-21:59): don't try to shut down
    exit
fi
# Script to try to shut down goes here
```
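The matching crontab entry would then be something like `*/30 * * * * /path/to/shutdown-check.sh` (the path is just a placeholder): cron handles the "every 30 minutes" part, and the script itself decides whether it's inside the shutdown window.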
It seems I misunderstood the level of English pronunciation you're at and the level you're aiming for. Could you clarify?
What I wrote in my comment is what made me comfortable with speaking English. I got some compliments on my English later, and some surprised reactions when I said I wasn't a native speaker (which I count as weak and strongish evidence, respectively, of being good at spoken English). I'm not sure I did much else, but I might be able to write how I leveled up if I know which level-up you're looking for (in case you read but don't reply: most likely the answer is practice, preferably in a way that is rewarding in itself).
Upboat for a recommendation that I think wouldn't work for me but looks like it would work for many other people. It's always interesting to see those (at least for me ;) ).
I guess this depends a lot on what kind of person you are. What worked for me was:
- talking to people who don't know my native language. To do it I had to speak clearly enough to be understood. And I was interested in those people, which made it easier for me to put in a bit of effort to make my pronunciation clear. Go on vacation with people from other nations. Go to events where you can talk to foreigners. Those kinds of things.
- listening to English. Watching movies, listening to podcasts, ...
Basically exposing myself to spoken English in ways that were rewarding on their own.
I think there is no reason to expect a single meaning of the word. You did a good job in enumerating uses of 'abstraction' and finding their theme (removed from the specific). I don't understand what confusion remains, though.
A link / googleable phrase for KonMarie, please?
I kept on reading and wanted to check your numbers further (the concrete math I could do in my head seems correct but I wanted to check moar), but I got lost in my tiredness and spreadsheets. If you're interested in feedback on the math you're doing: smaller steps are easier to verify. For example, when you give the formula for P(D|+), in order to verify it I have to check the formula, the value of each conditional probability (including figuring out the formula for each of those), and the result, all at the same time.
It would be much easier to verify if you wrote down the intermediate steps (possibly simplifying verification from 30 minutes of spreadsheet munching to a few in-head multiplications).
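For example, something like this (the numbers are made up purely for illustration, not taken from the post) can be checked one line at a time:

```python
# Hypothetical inputs, just to show the intermediate steps.
p_d = 0.20                 # prior P(D)
p_pos_given_d = 0.90       # P(+ | D)
p_pos_given_not_d = 0.10   # P(+ | not D)

p_pos_and_d = p_pos_given_d * p_d                  # 0.18
p_pos_and_not_d = p_pos_given_not_d * (1 - p_d)    # 0.08
p_pos = p_pos_and_d + p_pos_and_not_d              # 0.26
p_d_given_pos = p_pos_and_d / p_pos                # ~0.692
print(p_pos_and_d, p_pos_and_not_d, p_pos, round(p_d_given_pos, 3))
```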
I'm pretty sure you got math wrong here:
O(D:¬D), read as the odds of dementia to no dementia, is the odds ratio that D is true compared to the odds ratio that D is false. O(D:¬D)=3:1 means that it's 3 times as likely that somebody has dementia than that they don't. It doesn't say anything about the magnitude of the probability, so it could be small, like 3% and 1%, or big, like 90% and 30%.
P(D or ¬D) = 1 (with P=1, one either has dementia or doesn't have it) and P(D and ¬D) = 0 (the probability of having dementia and not having it is 0), so if O(D:¬D)=3:1 then P(D) = 75% and P(¬D) = 25%.
I mean, in your examples: if P(D) = 3% and P(¬D) = 1%, then what happens in the other 96+% of cases (when the patient neither has dementia nor doesn't have it)? If P(D) = 90% and P(¬D) = 30%, what is the state of the 20+% of patients who both have dementia and don't have it?
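A minimal check of that conversion (for two exhaustive, mutually exclusive outcomes):

```python
def odds_to_probabilities(a, b):
    # Odds a:b between D and not-D; the two probabilities must sum to 1.
    return a / (a + b), b / (a + b)

print(odds_to_probabilities(3, 1))   # (0.75, 0.25), i.e. P(D) = 75%, P(not D) = 25%
```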
I'm halfway through the article and it's been an interesting read so far, but I got to this sentence:
> But that is the trouble: we have no way to tell which traditions are adaptive and which are merely drift.
The article (so far) didn't provide evidence for that. I'd even say that the article provides some evidence against this claim: it describes a bunch of traditions, identifies them as useful, and explains why they're useful. I think there are examples of traditions that people identified as useless (or harmful), like using torture to extract confessions (I hope this is an example old enough to not be controversial).
So far my impression is that the article makes a good case for "distinguishing useful traditions is hard" and provides a few examples of traditions for which the reasons they're good require way more knowledge than the people executing those traditions have. Still, saying it's impossible seems wrong.
On the other hand, pointing out that we might invent a wrong explanation for a tradition (removing bitterness from manioc) and screw up the cleanup process is a good point.
so i kinda expected those. so do you know of any evidence that people's minds were changed significantly or mostly due to debate/discussion? polls? surveys? ???
If debate / discussion doesn't actually change people's minds, then it's totally safe to let anyone defend whatever nonsense they want; they're not going to change anyone's mind anyway.