Is there a way to do this without needing to secure collateral for the refund, using some stable investment vehicle like a CD? "The earlier you pledge, the bigger refund you get if the contract isn't fully funded" might help avoid the "waiting until the last moment" issue, but maybe there's some perverse incentive or other blocker.
I'm also very curious about how this method could solve issues with funding of scientific research. The lack of market pricing for research is a major impediment to allocating public funds effectively. But what prediction market can accurately estimate the price of something that might not pay off for 100 years?
The Hill has published some more information:
The state health department identified 469 COVID-19 cases among Massachusetts residents who went to Provincetown, a popular vacation destination in Barnstable County, in the month of July, including 346 fully vaccinated people.
Some 127 COVID-19 samples from the fully vaccinated, including recipients of all three U.S.-authorized vaccines, showed a similar viral load to the samples from the 84 unvaccinated people.
The report noted that microbiological studies are needed to confirm that similarity in the viral load to determine whether fully vaccinated people can transmit the virus.
I still have the impression that this data could be systematically biased: it makes sense that the viral load would be high among identified cases, but randomized testing of the broader population is needed to understand the base rates.
The CDC's claim that vaccinated people have similar viral loads from Delta as unvaccinated people is now spreading far and wide on social media. The Washington Post obtained their internal slide deck here, with the unpublished data supporting this claim on slide 17.
Does anyone understand how to square this with various other studies from the past few months with more positive results for vaccine efficacy, serum neutralization, etc.? Or even better, does anyone have the actual source for this data? To me, this claim seems too extreme to be likely, but even my many PhD scientist friends mostly seem to be accepting this completely uncritically.
"Twiki" is already the name of a wiki-related product (https://twiki.org/), so that might be confusing.
There was a correlation if she plotted the high-traffic times to the incidents … No. This was wrong. She was looking at it the wrong way. They didn’t just need to look at when things had happened. They needed to look at all the times Medina had seen similar conditions—high traffic, large-mass ships, mistuned reactors—and nothing had gone wrong.
– Naomi Nagata in "Babylon's Ashes" by James S. A. Corey
A few brief supplements to your introduction:
The source of the generated image is no longer mysterious: Inceptionism: Going Deeper into Neural Networks
But though the above is quite fascinating and impressive, we should also keep in mind the bizarre false positives that a person can generate: Images that fool computer vision raise security concerns
Zvi is their CEO.
I can find their site on the Wayback Machine as recently as March 22, 2015. OP could also try PMing user:Zvi.
My view of nutrition is basically option 2. "Nutrition science" as it exists today seems to be primarily an attempt to study subtle, complex effects using small, poorly-controlled samples. There are basic facts about nutrients that are fairly well supported, but I have never become convinced of the superiority of any "diet" based on the supposed evidence for it.
That order is based on the increasing size of the sets of possible values, of course.
Here is a suggestion that I haven't seen yet. I don't think it constitutes a full plan by itself, but it fits the form of an AI box experiment with Harry as the AI.
Harry and Voldemort's discussion about testing his horcrux 2.0 spell by offering immortality to one of his friends (read: minions, in his case) revealed a weakness, that Voldemort is heavily biased against certain ways of thinking. Harry should remind him of this in the context of the Patronus 2.0 spell. The fact that Harry was able to discover a new (and incredibly powerful, as we have seen) form of magic simply by having the right mindset may indicate that certain mindsets are key to discovering deeper secrets of magic as a whole. (I'm envisioning here, as may or may not be canon, magic as an API for tapping into the power of Atlantis.) Voldemort has a known interest in the deeper secrets of magic, and for this reason he should keep Harry alive, or risk losing access to mindsets he currently can't fathom.
Even Kepler's theory expressed as his three separate laws is much simpler than a theory with dozens of epicycles.
Kepler's heliocentric theory follows directly from Newtonian mechanics and gravitation, equations which can be encoded very simply and require few parameters to achieve accurate predictions for the planetary orbits. Copernicus' theory improved over Ptolemy's geocentric theory by using the same basic model for all the planetary orbits (instead of a different model for each) and by naturally handling the appearance of retrograde motion. However, it still required numerous epicycles in order to make accurate predictions, because Copernicus constrained the theory to use only perfect circular motion. Allowing elliptical motion would have made the basic model slightly more complex, but would have drastically reduced the number of necessary parameters and corrections. That's exactly the tradeoff described by MML.
The linked Wikipedia page provides a succinct derivation from Shannon and Bayes' Theorem.
You can change the comment sort to "new" instead of "top", below the tags at the bottom of the original post.
Among theories that explain the evidence equally well, those with fewer postulates are more probable. This is a strict conclusion of information theory. Further, we can trade explanatory power for theoretical complexity in a well-defined way: minimum message length. Occam's Razor is not just "a convenient heuristic."
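To spell out that well-defined tradeoff (standard MML notation; this gloss is mine, not part of the comment above): among candidate theories $H$ for data $D$, prefer the one minimizing the total two-part message length

$$\hat{H} = \arg\min_{H} \big[\, L(H) + L(D \mid H) \,\big],$$

where $L(H)$ is the length of the message stating the theory and $L(D \mid H)$ is the length of the message encoding the evidence given the theory. Extra postulates cost bits in $L(H)$ and are only worth it if they save at least as many bits in $L(D \mid H)$.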
I wish I had taken more statistics courses. I learned the basics and have picked up a fair amount of the advanced stuff through self-study during graduate school, but I didn't realize during college how useful it would be.
I wish more people would take more computer science courses. Intro to Comp Sci is usually too basic to be useful. Data structures, algorithms, numerical/scientific computing are all useful in a large variety of careers.
I defended my dissertation earlier this month, earning a PhD in experimental high energy physics in just over 3 years. In January, I'll be moving on to a postdoctoral research position at a national laboratory.
I've been catching up on Person of Interest. The first season is kind of a slog, but it gets much better after that. The most recent episode explicitly discusses AI Friendliness and AI-box problems.
the provisions of that Texas bill that was notably filibustered sounded reasonable to me
Political and social context is important for the Texas bill and others like it. The relentlessly pursued goal of the "pro-life" movement is to restrict access to abortion. Requiring hospital admitting privileges sounds reasonable on its face, but the stigma faced by abortion providers makes it an onerous burden that is more likely to shut down clinics than to improve the safety of their operations.
I think we should be less squeamish about acknowledging when we're trading off on human lives, particularly those of children.
Alongside bills such as the above, the "pro-life" movement is making every attempt to restrict access to long-lasting low-failure-rate birth control, which is one of the best ways to reduce abortions. They often base their arguments on erroneous claims that such birth control is abortifacient. Even if those claims were supported by evidence, the idea that a single-celled zygote is morally equivalent to (or even anywhere in the neighborhood of) a thinking, self-aware person is absurd.
"Human lives" is an artificial category. What counts as a human life? Why should we care about those things?
I think we should attempt to reduce (and ideally eliminate) these natural miscarriages through funding of medical research, the same way we do e.g. cot death.
There are two important points about these natural miscarriages. The first is the sheer number of them, which certainly would merit medical research and treatment if one considers fetuses morally equivalent or close to persons. The second, however, is not addressed by that proposal. In most cases of early natural miscarriage, the woman did not realize that she was pregnant. Does medical treatment for a fetus warrant, e.g., surveillance of women to ensure that no pregnancies go unnoticed?
Submitted, answering almost all questions.
The hardest question was choosing a single favorite LW post.
Also, I wasn't sure if Worm should count as more than one book. (It didn't end up mattering.)
A scanner + Photoshop makes it significantly easier to measure digit ratios.
killing the same entity inside someone else is just as bad as killing it outside
89% of abortions occur in the first 12 weeks of pregnancy (source). A 12-week-old fetus is not viable outside of the womb.
Also worth noting is that the majority of pregnancies are terminated by natural miscarriage within that 12-week period. In most such cases, the mother has not even realized she was pregnant. (source) Do you consider these natural miscarriages to be the equivalent of human deaths from disease or injury, and if so, what should be done about them?
Are these companies simply wrong and are actually hurting themselves by overextending their human resources?
Yes, unquestionably. We've known how human productivity works for over 100 years now. This knowledge has been "forgotten" due to the effects of tough, largely unprotected labor markets. If the guy at the next desk over stays an hour later than you every day, he'll look like he's working harder, so he'll be less likely to get laid off. Once you have multiple people thinking that way and no opposing structure to encourage cooperation, you get a classic status arms race.
Why Crunch Modes Don't Work: Six Lessons
Executive Summary
When used long-term, Crunch Mode slows development and creates more bugs when compared with 40-hour weeks.
More than a century of studies show that long-term useful worker output is maximized near a five-day, 40-hour workweek. Productivity drops immediately upon starting overtime and continues to drop until, at approximately eight 60-hour weeks, the total work done is the same as what would have been done in eight 40-hour weeks.
In the short term, working over 21 hours continuously is equivalent to being legally drunk. Longer periods of continuous work drastically reduce cognitive function and increase the chance of catastrophic error. In both the short- and long-term, reducing sleep hours as little as one hour nightly can result in a severe decrease in cognitive ability, sometimes without workers perceiving the decrease.
Managers decide to crunch because they want to be able to tell their bosses "I did everything I could." They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven't really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done.
Another good article about the history of the 40-hour work week is Why We Have to Go Back to a 40-Hour Work Week to Keep Our Sanity on AlterNet. I recognize the political leaning of AlterNet may be off-putting to some, so consider yourselves warned. I also don't necessarily endorse their theory that Asperger's Syndrome is to blame for the rise of overwork in Silicon Valley.
Suicide is indeed often an impulsive act, in which the urge must coincide with the means.
Stronger evidence for this claim:
The use of firearms is a common means of suicide. We examined the effect of a policy change in the Israeli Defense Forces reducing adolescents' access to firearms on rates of suicide. Following the policy change, suicide rates decreased significantly by 40%. Most of this decrease was due to decrease in suicide using firearms over the weekend. There were no significant changes in rates of suicide during weekdays. Decreasing access to firearms significantly decreases rates of suicide among adolescents. The results of this study illustrate the ability of a relatively simple change in policy to have a major impact on suicide rates.
FINDINGS: Firearm suicides were clearly the most frequent means of suicide. They were also used in 30.0% of domestic homicides, although other means were used at similar rates. Firearms for suicide were mainly used by men, especially army weapons. These men were younger, professionally better qualified, and fewer had ever been treated in one of the local state psychiatric services.
This is qualitatively a good point, but quantitatively you should be careful. There are only ~7 people in the world who are 6 sigmas above the mean (using a normal distribution).
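As a quick check of that number (a back-of-the-envelope sketch I added, using the standard normal tail probability and a rough 7-billion world population):

```python
from scipy.stats import norm

# P(Z > 6): probability of landing at least six standard deviations above the mean
tail = norm.sf(6)               # ~9.9e-10

world_population = 7e9          # rough figure
print(tail * world_population)  # ~7 people
```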
I'm also a physics grad student (experimental high energy) who is considering industry jobs in addition to postdocs. I've attended several career panels in the past few years. Most recently, a panel was held at Fermilab. One of the panelists started a blog, Science Jobs Headquarters, where you can read about that panel and get other good advice.
A few of my takeaways from the panel: 1) Python is really useful and everyone should learn it. (I need to work on taking this advice. I mostly develop in C++, and my Python is patchy.) 2) Some companies want to hire people with very specific skills and experience, but other companies are just looking for smart people who can learn on the job. The important point here is that few skills are absolutely essential to get a job in data science/private research/consulting/etc. Even if you're not a Python whiz, there are still people looking to hire you. 3) The website Glassdoor was recommended for investigating companies at which you might want to work.
You should act in a way that, if everyone acted that way, things would work out.
— Louis C.K.
Taken, answering all of the questions I was capable of answering. I will be very interested to see the results on some of the new questions. (The shifts on existing questions could also be interesting, but I don't expect much to change.)
This reminds me of the researcher's maxim:
A month in the laboratory can often save an hour in the library.
— Frank Westheimer
Note: The discrepancy in spelling ("ageing" vs. "aging") is in the original.
To indicate this more concisely, you can put [sic] after "Ageing" in the quote.
Holden: Oh, my God!
Buffy: Oh, your God what?
Holden: Oh, well, you know, not my God, because I defy him and all of his works, but—Does he exist? Is there word on that, by the way?
Buffy: Nothing solid.
— "Buffy the Vampire Slayer" Season 7, Episode 7 "Conversations with Dead People"
You can watch/listen to Arkani-Hamed's recent talk at SUSY 2013. At around 2:00, he says:
locality and unitarity emerging just as algebraic and geometric properties of this object
At around 6:00, a written slide describes his strategy:
Reformulate QFT, Eviscerating Locality + Unitarity -> see them arise as emergent phenomena
He goes on to discuss this subject in more detail.
Also, (somewhat technical) slides from his former student have a section called "Emergent Locality and Unitarity".
There's also title text (often called a tool tip) which appears when you hover the mouse over an image, but is a plain HTML feature.
As a senior in high school, I had the option to take two different computer science courses.
Option 1: AP Computer Science A, taught at my high school. The teacher was one of my school's math teachers who had some programming experience. (My school had not actually offered a comp sci course since I started there, even though Intro to Java was on the books.)
Option 2: An independent study in computer science, taught at the local vocational high school. The teacher had a master's degree in computer science from Brown and had worked for Macromedia/Adobe. (She was also the daughter of my school district's Director of Technology, whom I knew as a student representative to the Technology Committee.)
On the surface, Option 1 looks better for college admission, since it's an AP course. There may also be some perceived bias against vocational schools. However, I chose Option 2. This proved to be the superior choice. I had already taught myself basic programming skills, and the independent nature of the course meant I was able to learn at my own pace and study different topics with a knowledgeable teacher.
When I started college, it turned out that the AP Comp Sci A test wasn't even worth any course credit. Actually, the Computer Science department did not require Computer Science I as a prerequisite for more advanced courses, assuming that if a student could pass Computer Science II, they didn't need to take the previous course. Choosing the better course allowed me to get a jump-start on learning more once I got to college. Although I did not end up completing my intended computer science minor due to too many course conflicts with my physics major, I still found it useful to have an advantage from my high school course. I continue to use the lessons I learned from my high school teacher (who excelled at teaching object-oriented programming and data structures) in my current software/programming-heavy research on the CMS experiment.
Full disclosure: the non-AP course did not contribute to my weighted GPA or class rank because I took it in the last semester of my senior year. The last semester was not counted since rankings had to be decided before the semester ended, both for reporting to colleges and for the purpose of valedictory and salutatory addresses during graduation.
Kevin’s school offers a molecular biology elective during second semester, which is not an honors or AP course. Kevin would like to take the elective during the second semester of his junior year, in addition to his other coursework, but he knows that doing so would lower his GPA, so he decides not to.
In Kevin’s story, the class ranking system was poorly designed: it rewarded some students for achieving less rather than for achieving more. The colleges that Kevin applied to were relying on a faulty measure of quality.
Taking a non-honors or AP course only harms one's GPA (in this ranking system) if it replaces an honors or AP course. There have to be enough honors or AP courses offered to fill a student's entire schedule in order for this to be the case.
Ranking systems which do not weight honors or AP courses can also encourage students to achieve less. This can even happen when honors courses of different difficulties end up with the same weighting.
I think the real lesson to draw from such examples is that creating a measure by taking information which exists in a multidimensional space and projecting it into a single dimension can lead to perverse incentives. (A similar idea is mentioned in another comment in this thread, but I thought it worth pointing out the general principle.)
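To make the GPA mechanism concrete (illustrative numbers of my own; neither the weights nor the course counts come from Kevin's story): under a per-course weighted average, adding a genuinely worthwhile but unweighted elective can drag the single summary number down.

```python
# Hypothetical weighting: A in an honors/AP course = 5.0, A in a regular course = 4.0.
HONORS_A, REGULAR_A = 5.0, 4.0

# Straight-A student taking six honors courses:
gpa_before = 6 * HONORS_A / 6                  # 5.00

# Same student adds a seventh, non-honors elective and earns an A in it:
gpa_after = (6 * HONORS_A + REGULAR_A) / 7     # ~4.86

print(gpa_before, gpa_after)  # the ranking number falls even though the student learned more
```

Whether a real ranking system behaves this way depends on exactly how it averages and which courses it counts, which is what the caveats above are about.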
So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits somehow else.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step
This is a great example of how human value is complicated. Optimizing for stated or obvious values can miss unstated or subtler values. Before we can figure out how to get what we want, we have to know what we want. I'm glad CFAR is taking this into account.
We could just run through the whole list of Things I Won't Work With.
For perfect prediction of the universe, the universe must be COMPLETELY simulated. The mechanism to simulate the universe must have memory sufficient to store the state of the universe completely. But that storage mechanism must then store its own state completely, PLUS the rest of the universe. And of course inside the state stored, must be a complete copy of the stored information, PLUS the rest of the universe.
The mechanism can just store a reference to itself.
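A toy illustration of that reply (just a sketch of the data-structure point, not a claim about how any real simulator would work):

```python
# A toy "complete state" that must include the simulator's own storage.
universe = {"rest_of_universe": "...", "simulator_memory": None}

# Rather than embedding a full copy of itself (which would regress forever),
# the store holds a reference to itself.
universe["simulator_memory"] = universe

# The structure stays finite; following the reference just loops back to the same object.
assert universe["simulator_memory"] is universe
```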
It's plausible that having one's Patronus dispelled by one's future self is not as noticeable as having one's Patronus countered by a killing curse.
Alternatively, an even simpler option is that it was still Present-Harry's patronus, just given updated instructions by Future-Harry.
When asked to find Hermione, why would Harry's Patronus have found a simulacrum instead of the real one?
The Patronus that came back to Harry could be Future-Harry's Patronus, if time travel is involved.
Note: I don't personally place a high probability on theories involving time travel in this instance, but they do present a possible explanation for that objection.
And reading a little further than that...
The test does not very accurately predict levels of performance, but by combining the result of six replications of the experiment, five in UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program but background programming experience, on the other hand, has little or no effect.
The 2006 study that claimed that humans divide neatly into "natural computer programmers" and "everyone else" failed to replicate in 2008 on a larger population of students.
This is an incomplete and inaccurate summary of the research. Further work has been done, and a revised test shows significant success:
Meta-analysis of the effect of consistency on success in early learning of programming (pdf)
Abstract: A test was designed that apparently examined a student's knowledge of assignment and sequence before a first course in programming but in fact was designed to capture their reasoning strategies. An experiment found two distinct populations of students: one could build and consistently apply a mental model of program execution; the other appeared either unable to build a model or to apply one consistently. The first group performed very much better in their end-of-course examination than the second in terms of success or failure. The test does not very accurately predict levels of performance, but by combining the result of six replications of the experiment, five in UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program but background programming experience, on the other hand, has little or no effect.
The previous research and the test itself can be found on this page.
you could still get a quantum noise generator hooked up
In case anybody needs one: ANU Quantum Random Numbers Server
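If you want to pull numbers from it in code, something like the sketch below has worked against the JSON endpoint the ANU site has documented; the URL, parameters, and response format here are from memory and may be out of date, so treat them as assumptions and check the site's current API documentation.

```python
import requests

# Assumed endpoint and parameters; verify against the ANU QRNG site's current docs.
URL = "https://qrng.anu.edu.au/API/jsonI.php"

resp = requests.get(URL, params={"length": 16, "type": "uint8"}, timeout=10)
resp.raise_for_status()

numbers = resp.json()["data"]  # expected: a list of 16 integers in [0, 255]
print(numbers)
```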
Norbert Wiener, a mathematician from MIT, postulated unfriendly AI in 1949.
The possibility of learning may be built in by allowing the taping to be re-established in a new way by the performance of the machine and the external impulses coming into it, rather than having it determined by a closed and rigid setup, to be imposed on the apparatus from the beginning.
...
Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.
Have you seen the comments by kalla724 in this thread?
Edit: There's some further discussion here.
In all honesty, I haven't even read the study, because I can't find the full text online
Here it is (pdf link).
primarily via cancer.
Also heart disease, stroke, and emphysema.
Relevant article: Right vs. Pragmatic
There was no chance the signs would ever work. The people who threw paper towels on the floor knew that it was “wrong”. Maybe their desire to avoid touching the doorknob was stronger than their desire to do the “right” thing every time. Or maybe they just didn’t give a damn about making the bathroom slightly worse for someone else to make it slightly better for themselves. Either way, a sign’s not going to solve the problem, because the problem isn’t that they didn’t know the right thing to do. They knew what they were doing, and for whatever reason, they didn’t care.
This problem wasn’t solved by the time I left that office. It probably still isn’t.
The pragmatic way to solve the problem would have been to adapt to what these people were going to do anyway: just put another trash can by the door.
The main thing that makes me suspect we might have AGI before 2100 is neuroprostheses: in addition to bionic eyes for humans, we've got working implants that replicate parts of hippocampal and cerebellar function for rats.
The hippocampal implant has been extended to monkeys.
The initial scenario seems contrived. Your calculation essentially just expresses the mathematical fact that there is a small difference between the numbers 49.99 and 50, which becomes larger when multiplied by 7 billion minus one. What motivates this scenario? How is it realistic or worth considering?
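For reference, the arithmetic being referred to (my restatement, not a figure from the original post):

$$(50 - 49.99) \times (7 \times 10^9 - 1) \approx 7 \times 10^7.$$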