No problem. Looks like that will be the soonest I'll be able to make it as well.
Looks like SSC meetups are still ongoing: https://www.lesswrong.com/events/ifsZbNmHwxhCm7F4n/slate-star-codex-meetup?commentId=ZccQoDDQY2skHDsAY#ZccQoDDQY2skHDsAY
This whole time? Man, I haven't been looking hard enough. What's the algorithm, 2nd Saturdays at 1900?
Ahh, I think I did not think through what "rationality enhancement" might mean; perhaps my own recent search and the AI context of Yudkowsky's original intent skewed me a little. I was thinking of something like "understanding and applying concepts of rationality" in a way that might include "anticipating misaligned AI" or "anticipating AI-human feedback responses".
I like the way you've framed what's probably the useful question. I'll need to think about that a bit more.
Cool, thanks for sharing.
I posted about my academic research interest here, do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, but especially Germany.
Cool, that sounds like a pretty useful combination.
I'd love to. The soonest I'd be available in August would be at the end of the month. I'm sure we can find somewhere public that would work. What will you be studying?
A few observations.
First, it seems likely that the increase in positivity can be explained by fewer precautionary tests: fewer people are getting tested "just to be sure", fewer people are being required by work/travel/etc. to get tested. Therefore fewer negative tests.
Second, it seems likely to me that the "93%, 93%, 91%" numbers are calculated independently from each other: 93% less likely to contract than the unvaccinated, 93% less likely to be hospitalized than the unvaccinated, and the vaccinated group was 91% less likely to die than the unvaccinated group. So with alpha, all probabilities were reduced ~uniformly. Now consider a variant (delta) where the vaccine is not as effective at reducing symptoms of any level, but is still ~as effective at preventing hospitalizations and deaths. This would decrease the likelihood of the vaccine preventing a positive test or symptoms, while not changing the hospitalization/death numbers much. This makes sense in my head, but perhaps there's something I'm missing?
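A toy calculation might make the independence point concrete. All the risk numbers below are invented purely for illustration (they are not Israel's actual data), and the function name is mine:

```python
# Effectiveness computed independently at each severity level as
# 1 - relative risk. All rates here are made up for illustration.
def effectiveness(rate_vaccinated, rate_unvaccinated):
    return 1 - rate_vaccinated / rate_unvaccinated

# Alpha-like scenario: risk reduced ~uniformly at every severity level.
print(effectiveness(0.007, 0.100))   # infections:       ~93%
print(effectiveness(0.0009, 0.010))  # hospitalizations: ~91%

# Delta-like scenario: weaker against any symptoms, similar against
# severe outcomes, so the three headline numbers diverge.
print(effectiveness(0.036, 0.100))   # infections drop to ~64%
print(effectiveness(0.0009, 0.010))  # hospitalizations still ~91%
```

The point is just that nothing forces the three percentages to move together if they are each measured against their own unvaccinated baseline.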
Finally, a typo that tripped me up a bit:
We should also look at case counts in Israel. On June 18 they had 1.92 cases per million, right before things started rising, on June 14 it was 65.09, for R0 = 1.97. From previous data, we can presume that when Delta was a very small portion of Israeli cases, the control system adjusted things to something like R0 = 1, so we’ll keep that number in mind.
The second "June" should be "July", as in "July 14". (Small nitpick, I know, but it took me a minute to work out, so I figured I'd share.)
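For what it's worth, the corrected date does reproduce the quoted figure. A quick sketch, assuming a ~5-day serial interval (my assumption, not stated in the quote):

```python
cases_start = 1.92   # cases per million, June 18
cases_end = 65.09    # cases per million, July 14 (the corrected date)
days = 26            # June 18 through July 14
serial_interval = 5  # assumed generation time in days (my assumption)

# R0 estimated as the per-generation growth factor
r0 = (cases_end / cases_start) ** (serial_interval / days)
print(round(r0, 2))  # -> 1.97
```

With "June 14" the elapsed time would be negative, which is what made the passage confusing.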
I've started formalizing my research proposal, so I now have:
I intend to use computational game theory, system modeling, cognitive science, causal inference, and operations research methods to explore the ways in which AI systems can produce unintended consequences and develop better methods to anticipate outer alignment failures.
Can anyone point me to existing university research along these lines? I've made some progress after finding this thread, and I'm now planning to contact FHI about their Research Scholar's Programme, but I'm still finding it a little time-consuming to try to match specific ongoing research with a given University or professor, so if anyone can point me to any other university programs (or professors to contact) which would fit well with my interests, that would be super helpful.
Wait, isn't that an example of efficiency of scale being dependent on investment? You have to get a 1-foot rope and scissors, but once you have them, you can create two 1/2-foot ropes? I think the "given a 1-foot rope" is doing more work than you realize, because when I try to apply your example to the world above, I keep getting hung up on "but in the imaginary world above, when we account for economy of scale, if you just needed one 1/2-foot rope, you would just create a 1/2-foot rope, and that would take you half the time of creating 1 foot of rope." And for The David, I feel like "sure, but that doesn't explain why someone wouldn't just carve their own David if they wanted one". I think I'm bypassing some of the issue here, but I'm not entirely sure what it is.
It does, however, bring up another interesting reason for trade (and this may be part of how investment can be independent from efficiency of scale): shared resources. If a pair of scissors does not scale according to how often I use them, and I only use them once per day, I can increase efficiency/decrease required investment by letting others use them when I'm not. This applies to the David as follows: utility gained from the David is not zero-sum; multiple people can gain utility from it without decreasing the utility the others gain; therefore it does not make sense for everyone to carve their own. So any time a resource or product produces non-zero-sum benefits if it exists, we have a reason for its use to be traded/trade to be involved in sharing it.
Applying this, if 5 people each carve a statue and put them in a sculpture garden in exchange for access to the garden, they can each enjoy five statues (alternatively, they could collaborate to build the statue in 1/5th the time and share in the enjoyment of it).
Not sure this is what you were getting at, but I think I've talked myself into thinking that when investment has independence from efficiency of scale it's because of the non-zero sum nature of some shared resources.
Hmm. I feel like it's relevant that your example relies on trade, which we're trying to eliminate. Therefore, if all of the other reasons for trade go away, this example would be irrelevant.
But can we recreate it elsewhere? Perhaps there is some task which is time sensitive, but cannot be done by one person (in their remaining marginal time) at a speed which does not decrease marginal gains. Information sharing comes to mind, but that seems to have already been accomplished by the society outlined above.
Yeah, I think we’re in agreement. I can’t think why there would ever be a minimum, except to exceed the break-even point on fixed costs.
Any chance this will be resuming any time soon?
Some interesting responses here, and although I didn't read through all of them, I read enough to get a sense of the kind of approach most people seem to be taking here.
As someone who was where you are now about five years ago, I will share the way I think about it, especially since it seems quite distinct from the approach most people are taking here.
Short answer (and hot take for this crowd): it's not. The kind of morality I believed in as a Christian (an objective truth about things being Right and Wrong) is not possible without a god.
The illusion of such a world, however, is very possible, and in fact predicted by some pretty prominent evolutionary psychology theories of behavior. If you have not read The Selfish Gene, I highly recommend it as Dawkins' treatment of this issue is the best I've heard and the (I'm pretty sure) origin of every other good explanation I've heard from elsewhere.
In essence, the illusion of a world with an objective moral reality is the evolutionary response to the cooperation problem associated with repeated games where actors have the ability to hold a grudge: for any single game, the optimal strategy is to pursue the course which grants the maximum individual reward (the defect strategy in the prisoner's dilemma), but with repeated games in a population with the ability to hold a grudge, this strategy is out-competed by a "tit-for-tat with initial cooperation" strategy. Therefore, a person who is likely to cooperate with others trying to work toward the optimal group strategy, at least until betrayed, will outcompete someone who looks out only for himself. This tendency, which manifests as a general feeling of "the right way to do things", was the easiest evolutionary pathway toward that cooperative behavior.
But why not have a sense of "pretend to cooperate until no one is looking, then do what's best for yourself"? For one, because then you wouldn't be righteously indignant and impassioned when you caught someone else following this strategy (important for the tit-for-tat part), but also because pretending involves lying. If the evolved strategy is to lie, then an expected co-evolution would be the ability to detect lies; a feedback loop would then result until a solution developed: a person who can lie without realizing it himself, so that he doesn't give himself away. This is in fact what we find when people passionately defend behavior that to everyone else is blatant hypocrisy: they are self-deceived and therefore don't realize the inconsistency (this is also a large part of many of the fallacies discussed in the sequences).
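To make the repeated-game point concrete, here is a minimal iterated prisoner's dilemma sketch (standard payoff values; the strategy and function names are mine):

```python
# Payoff table: (my move, their move) -> my score. C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# One-shot logic favors defection, but over repeated rounds a pair of
# tit-for-tat players (30 each) far outscores a defector facing them.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

In a population of tit-for-tat players, each pairing yields 30 points per player, while a lone defector collects only 14 per pairing, which is the sense in which defection gets out-competed.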
Let's test this against your (and some of the others posted here) example: murder. What we consider to be "murder" is usually undeserved killing, usually to benefit oneself.
Does this improve the outcome for an individual game? Yes, you get to take what he has.
But what about repeated games, where other players can hold a grudge? No, the other villagers will gang up on you when they see what you have done. And when the other villagers execute you as a group, this is "justice", not "murder". Why? Because it solves the cooperation problem by disincentivizing potential murderers. (Incidentally, this is why it's so easy to come up with ethical dilemmas involving killing; because we pit two competing psychological solutions against each other: "don't kill" vs. "justice".)
How else to test this? Go through the commands in the Bible, and do your best to answer "would I feel this way if I hadn't read this?" I predict that >90% of the ones for which you say "yes" can be shown to solve a cooperation problem found in the ancestral environment. (With lesser confidence, I predict that >50% of the ones for which you said "no" can be shown to have solved a cooperation problem found at the time of its writing.)
In retrospect, the alignment of psychology to the ancestral environment that the sequences demonstrated was one of the arguments which most strongly (downwardly) updated my belief in God. Why does the killing of a pre-pubescent seem so much worse than the killing of someone older? Because the older person is a competitor, rather than a descendant/kin. Why does abortion seem so much worse the older the baby gets? Because it is becoming increasingly viable. Why am I more emotionally motivated by the fate of those close to me than the fate of an entire neighboring city? Because increased relatedness means more shared genes.
One final note: from a purely practical perspective, consider how much utility you are currently gaining from your beliefs. It may be too late to just choose not to pursue this to its conclusion (it was for me), but consider the possibility that if you're wrong, knowing doesn't actually improve your utility. It was world-shattering for me to change my mind on this, and I honestly don't know what I would do if I had an "unknow" button.
Looking through the comments, it seems like most of my thoughts have been captured (economy of scale, collaboration producing non-linear accumulation, etc.). But some of the others (risk management, the time axis of logistics) helped me come up with a new one: perishability. When we combine some of the other factors (especially risk), people will at times have a perishable surplus. At these times they would seek to convert this surplus to something non-perishable or some other thing that they need at the time. If we had a society as described above plus uniform starting conditions and everyone used the same dice-rolls (i.e. everyone had the same good/bad corn year), I believe this reason for trade would cease to exist.
I agree with both, but claim that they are, in a sense, the same problem: if you solve the economy of scale issue, along with the parameters above, people would simply produce the amount desired with no diminishing marginal return problem on consumption.
Isn't 2 just a product of 1? If 1 were not true, couldn't you just get started at small scale? This may be understood, but if not, it seems useful to point out the entanglement.
Also, another aspect of the insurance is spoilage: some goods preserve better than others, so it makes sense to convert excess into something stable so that you can "self insure".
Unfortunately, a car is an unavoidable cost for me, I expect that is a large part of the difference.
I do have a car, but I don't even live in the bay area and didn't realize how many of you were in Berkeley. Makes sense now.
I think I underestimated how much of the Rationalist community was in the bay area. That fact alone resolves most of my confusion, thank you.
But most people's miles are in Uber/Lyft
This is interesting to me, as every time I've looked at Uber/Lyft prices in my area it has seemed a bit high for it to be my go-to option. Can you link me to a good discussion regarding why this is the typical Rationalist choice? (I've read a lot of the sequences, etc. but really don't spend much time on the blog itself.)
Nevermind, I think I've mostly figured it out: by arbitraging, I'm effectively borrowing against my various positions, so once the question is resolved, those debts must be paid before I can get the difference.
Another question: what is the more general rule in trying to figure out when this can be done? By my math, 14.88 is about 93% of the 16 options, or 99% of 15 options. This leads me to believe that the more general rule is .99*(n-1) where n is the number of options, which would make sense, since you will not get paid for one of your positions. Is this roughly correct?
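A sketch of the rule I'm proposing (the 1% haircut is my back-of-the-envelope guess from the numbers above, not a figure from PredictIt's documented fee schedule):

```python
# Buying "No" on all n mutually exclusive options: exactly one option
# resolves "Yes", so only n - 1 of the "No" positions pay out, and an
# assumed ~1% haircut (my guess, not an official fee) shaves the rest.
def max_breakeven_cost(n, haircut=0.99):
    return haircut * (n - 1)

print(round(max_breakeven_cost(16), 2))  # -> 14.85, close to the 14.88 observed
```

If the true fee structure differs, the haircut parameter is the only thing that would change.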
As someone new to PredictIt, I can't help feeling I'm misunderstanding how PredictIt pays out. Now that I have completed this, and have ~859 shares of each "No", will I not get paid $1 for each share of "No" which turns out to in fact be "No"? Based on what everyone is saying here, this seems highly unlikely, but I can't figure out how it works otherwise.
Quick update: I came up with a game to use as an icebreaker, and I'd love ideas for future variations. It's a combination of Credence Calibration, 20 Questions, and Taboo. The children are trying to determine which of three possible states exists on the card which I have face down (for my first iteration, the possibilities will be "Cat", "Rat", and "Dog"). Every kid gets 30 poker chips to allocate among the three possibilities. Kids then take turns asking a yes-or-no question, but before each question, I roll a six-sided die. If it comes up six, all chips placed on a wrong answer are turned in; otherwise, they ask their question, I answer with something on a scale of "Never" to "Always", and they are permitted to reallocate their chips. But there is a catch: they are not permitted to use certain words (i.e. cat, dog, rat, meow, bark, pet, etc.) in their questions.
The point is to find tests which can serve as evidence between the possibilities and recognize how confidence should change according to evidence.
Would be interested in other possible states for future iterations.
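One bit of arithmetic that might help tune the die mechanic (the function name is mine): the chance that at least one "collect the wrong bets" roll fires grows quickly with the number of questions, which is what pressures the kids to reallocate early rather than late.

```python
# Probability that the 1-in-6 collection roll happens at least once
# over a given number of questions.
def collection_chance(questions):
    return 1 - (5 / 6) ** questions

for q in (1, 5, 10):
    print(q, round(collection_chance(q), 2))
# 1 question   -> ~0.17
# 5 questions  -> ~0.60
# 10 questions -> ~0.84
```

Adjusting the die (d6 vs. d8, say) is an easy knob if the penalty feels too frequent or too rare in practice.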
Wait a minute, are you Randall Munroe or do you just like the website so much that you adopted the name for your handle? If so, I'm flattered, I love your website.
I like the coin flip idea. I have done something along these lines as a single session with homeschool kids where I gave them two decks of cards and had them stack the deck while I was out. When I came back I used an Excel VBA program I had made to continually reassess the maximum likelihood for the red/black proportion and updated it as I drew cards. Didn't go quite as well as I had hoped, mostly because I didn't emphasize that in order to get quick results they needed to really stack the deck, and they had made it 24 red, 28 black, or something similar.
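The running maximum-likelihood estimate described above (which I originally did in Excel VBA) can be sketched with a hypergeometric likelihood; the names here are mine:

```python
from math import comb

# After drawing some cards without replacement from a 52-card deck,
# find the maximum-likelihood number of red cards in the full deck.
def ml_red_count(red_drawn, black_drawn, deck_size=52):
    drawn = red_drawn + black_drawn

    def likelihood(reds_in_deck):
        blacks_in_deck = deck_size - reds_in_deck
        if red_drawn > reds_in_deck or black_drawn > blacks_in_deck:
            return 0.0
        # Hypergeometric probability of this draw given the deck composition
        return (comb(reds_in_deck, red_drawn) * comb(blacks_in_deck, black_drawn)
                / comb(deck_size, drawn))

    return max(range(deck_size + 1), key=likelihood)

# With a barely stacked deck (e.g. 24 red / 28 black), early draws are
# nearly uninformative, which is why the session dragged.
print(ml_red_count(red_drawn=3, black_drawn=7))
```

Updating this estimate after every draw gives the same "continually reassess" behavior the spreadsheet had.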
Anyway, yes, I was thinking exploring probability might have some more possibilities along these lines, so I will think about that a little more. We did optical illusions today: persistence of vision, pattern juxtaposition, etc. Then we talked about how they fool system 1 thought, but you can use system 2 techniques to defeat them, did things like measuring the apparently converging lines, slowed down the thaumatrope, etc.
Yes, I agree that doing good science is hard with flash, I've just had everyone telling me that that's what hooks them. Good to know that's not really true.
I'm thinking along the lines heavily leading to/giving the model, not necessarily having them come up with it themselves and then testing it. But part of the reason I'm asking here is to see if anyone has ideas regarding models which are discoverable by kids this age so that they can get there by more of their own processes.
Yes, I suppose I could have been more specific about the number of kids. I will be teaching my own two at a minimum, but could have as many as seven others join.
Thanks for the note about the handbook, I'll check it out.
I like these ideas, and you're right that these KISS type questions are good at getting at the heart of mechanisms and generalizing outside of context.
I'll mention now though, that I've been rightly advised to not disregard the flashy stuff kids like to see, because it is effective at getting them excited about science. Do you have any specific recommendations on how to take some of the classic "experiments for kids!" stuff you can find with a google search and add in a dose of "construct a falsifiable model and attempt to falsify it"? Some way I can keep the flash, but still teach them to the importance of models which allow them to make bold predictions?
Thanks for the link! It gave me his email address. I agree about the Inflection Point curriculum; the task will be to convert it to elementary level.
Found it thanks to the website posted below. duncan@rationality.org
How would I contact him?
Well, my first thought is that I need to spend some actual time on this site (I had to look up most everything you mentioned); Most of my education has simply come from Yudkowsky's book/compilation.
Zendo definitely looks promising, and should definitely be an element of the course as well as something I play with my kids. As I envision the course, however, it would be an element such as a warm up or cash out, not the core curriculum.
My thoughts on Credence Calibration are similar to my thoughts on Zendo with the following modifications: each kid would be given ten poker chips, we would play the 2 statement variant (at least initially), scoring would be simplified to the liar keeping all poker chips bet on his lie, winner would be the one with the most poker chips at the end.
Focusing and Internal Double Crux seem like they would be pretty hard to teach to elementary age children. Focusing mostly because it seems like it would require one-on-one instruction, at least initially.
Unfortunately, I do not have much instrumentation. I could buy inexpensive things, so a thermometer and a humidity sensor would be doable, but it seems like a worthwhile CO2 sensor might cost a little more (based on my brief look on Amazon). I really do like the idea of the experiment though.
Some experimentation ideas I have received: investigating air pressure changes created by shower water spray as measured by a shower curtain (also blowing over pieces of paper, blow dryer, etc.), electricity produced by a lemon battery, water drawn through celery (including dyed water), and spectrum differences in light sources as shown by a prism.
I took a look; looks pretty cool and I will definitely get this to play with my kids. Not sure it's quite what I want to build a curriculum around though.
C.S. Lewis addressed the issue of faith in Mere Christianity as follows:
In one sense Faith means simply Belief—accepting or regarding as true the doctrines of Christianity. That is fairly simple. But what does puzzle people—at least it used to puzzle me—is the fact that Christians regard faith in this sense as a virtue, I used to ask how on earth it can be a virtue—what is there moral or immoral about believing or not believing a set of statements? Obviously, I used to say, a sane man accepts or rejects any statement, not because he wants or does not want to, but because the evidence seems to him good or bad. Well, I think I still take that view. But what I did not see then— and a good many people do not see still—was this. I was assuming that if the human mind once accepts a thing as true it will automatically go on regarding it as true, until some real reason for reconsidering it turns up. In fact, I was assuming that the human mind is completely ruled by reason. But that is not so. For example, my reason is perfectly convinced by good evidence that anaesthetics do not smother me and that properly trained surgeons do not start operating until I am unconscious. But that does not alter the fact that when they have me down on the table and clap their horrible mask over my face, a mere childish panic begins inside me. In other words, I lose my faith in anaesthetics. It is not reason that is taking away my faith: on the contrary, my faith is based on reason. It is my imagination and emotions. The battle is between faith and reason on one side and emotion and imagination on the other. When you think of it you will see lots of instances of this. A man knows, on perfectly good evidence, that a pretty girl of his acquaintance is a liar and cannot keep a secret and ought not to be trusted; but when he finds himself with her his mind loses its faith in that bit of knowledge and he starts thinking, “Perhaps she’ll be different this time,” and once more makes a fool of himself and tells her something he ought not to have told her. 
His senses and emotions have destroyed his faith in what he really knows to be true. Or take a boy learning to swim. His reason knows perfectly well that an unsupported human body will not necessarily sink in water: he has seen dozens of people float and swim. But the whole question is whether he will be able to go on believing this when the instructor takes away his hand and leaves him unsupported in the water—or whether he will suddenly cease to believe it and get in a fright and go down. Now just the same thing happens about Christianity. I am not asking anyone to accept Christianity if his best reasoning tells him that the weight of the evidence is against it. That is not the point at which Faith comes in. Faith, in the sense in which I am here using the word, is the art of holding on to things your reason has once accepted, in spite of your changing moods.
Although many religious people use the word differently, this is how I use "Faith", and I propose it as an acceptable definition to facilitate this discussion: a determination to hold on to what you have already established a high confidence level in, despite signals you may have received from less rational sources (i.e. emotions).