Comments
Strongly agree. I would be happy to help. Here are three academic AI alignment articles I have co-authored. https://arxiv.org/abs/2010.02911 https://arxiv.org/abs/1906.10536 https://arxiv.org/abs/2003.00812
While not captured by the outside view, I think the massive recent progress in machine learning should give us much hope of achieving LEV in 30 years.
Yes: the more people infected with the virus, and the longer the virus persists in people, the more time there is for a successful mutation to arise.
I did a series of podcasts on COVID with Greg Cochran, and Greg was right early on. He has said from the beginning that the risk of a harmful mutation is reasonably high: because the virus is new, there are likely lots of potential beneficial mutations (from the virus's viewpoint) that have not yet been found.
https://soundcloud.com/user-519115521
From an AI safety viewpoint, this might greatly increase AI funding and drive talent into the field, and so move up the date at which we get a general artificial superintelligence.
Yes, for a high concentration of observers: if high-tech civilizations have strong incentives to grab galactic resources as quickly as they can, thus preventing the emergence of other high-tech civilizations, then most civilizations such as ours will exist in universes that have some kind of late great filter knocking down civilizations before they can become spacefaring.
Thanks, that's a very clear explanation.
At the end of Section 5.3 the authors write "So far, we have assumed that we can derive no information on the probability of intelligent life from our own existence, since any intelligent observer will inevitably find themself in a location where intelligent life successfully emerged regardless of the probability. Another line of reasoning, known as the “Self-Indication Assumption” (SIA), suggests that if there are different possible worlds with differing numbers of observers, we should weigh those possibilities in proportion to the number of observers (Bostrom, 2013). For example, if we posit only two possible universes, one with 10 human-like civilizations and one with 10 billion, SIA implies that all else being equal we should be 1 billion times more likely to live in the universe with 10 billion civilizations. If SIA is correct, this could greatly undermine the premises argued here, and under our simple model it would produce high probability of fast rates that reliably lead to intelligent life (Fig. 4, bottom)...Adopting SIA thus will undermine our results, but also undermine any other scientific result that would suggest a lower number of observers in the Universe. The plausibility and implications of SIA remain poorly understood and outside the scope of our present work."
I'm confused, probably because anthropic effects confuse me and not because the authors made a mistake. But don't the observer selection effects the paper uses derive information from our own existence, and if we make use of these effects shouldn't we also accept the implications of SIA? Should rejecting SIA because it results in some bizarre theories cause us to also have less trust in observer selection effects?
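For readers who want the reweighting in the quoted example spelled out, here is a minimal sketch in Python (my own illustration using the quote's toy numbers, not anything from the paper): SIA multiplies the prior credence in each candidate universe by its observer count and then renormalizes.

```python
# Toy SIA calculation using the quoted example's numbers (10 vs. 10 billion civilizations).
# The 50/50 prior is an illustrative assumption, not a claim from the paper.
priors = {"small": 0.5, "large": 0.5}
observers = {"small": 10, "large": 10_000_000_000}

# SIA: weight each hypothesis by its number of observers, then renormalize.
weighted = {u: priors[u] * observers[u] for u in priors}
total = sum(weighted.values())
posterior = {u: w / total for u, w in weighted.items()}

print(posterior["large"] / posterior["small"])  # 1e9: a billion times more likely, as in the quote
```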
Not that I recall.
In 2007 I wrote an article for Inside Higher Ed advocating that "institutions should empower graduating seniors to reward teaching excellence. Colleges should do this by giving each graduating senior $1,000 to distribute among their faculty. Colleges should have graduates use a computer program to distribute their allocations anonymously."
https://insidehighered.com/views/2007/09/07/beyond-merit-pay-and-student-evaluations
In an accident something from your car could hit you in the head even if you have an airbag. For example, the collision could cause your head to hit a side window.
The helmet I linked to is light and doesn't block your vision, so I don't see how it could do any harm. It would do a lot of good if you were wearing it when your head collided with something.
Do you wear a helmet when in a car? I do.
Think of mutational load as errors. Reducing errors in the immune system's genetic code should decrease the risk of pandemics. Reducing errors in people's brains should greatly increase the quality of intellectual output. Hitting everyone in the head with a hammer a few times could, I suppose, through an extraordinarily lucky hit cause someone to produce something good that they otherwise wouldn't, but most likely the hammer blows (analogous to mutational load) would just give us bad stuff.
The best way to radically increase the intelligence of humans would be to use Greg Cochran's idea of replacing rare genetic variations with common ones, thereby greatly reducing mutational load. Because of copying errors, new mutations keep getting introduced into populations, but evolutionary selection keeps working to reduce the spread of harmful mutations. Consequently, if an embryo has a mutation that few other people have, that mutation is far more likely to be harmful than beneficial. Replacing all rare genetic variations in an embryo with common variations would likely result in the eventual creation of a person much smarter and healthier than has ever existed. The primary advantage of Cochran's genetic engineering approach is that we can implement it before we learn the genetic basis of human intelligence. The main technical obstacle to implementing this approach, from what I understand, is the inability to edit genes with sufficient accuracy, at sufficiently low cost, and with sufficiently few side effects.
Much of the harm of aging is the increased likelihood of getting many diseases such as cancer, heart disease, Alzheimer's, and strokes as you age. From my limited understanding, Metformin reduces the age-adjusted chance of getting many of these diseases, and thus it's reasonable, I believe, to say that Metformin has anti-aging effects.
Thanks!
Metformin as a rationalist win. For several years I have been taking 2 grams of Metformin a day for anti-aging reasons. There is a vast literature on Metformin, and as a mere economist I'm unqualified to summarize it. But my (skin-in-the-game) guess is that all adults over 40 (and perhaps simply all adults) should be taking Metformin, and I would love it if someone with a bio background wrote up a Metformin literature review understandable to those of us who understand statistics but not much about medicine. The reason Metformin might be universally beneficial and yet not generally taken is that no one holds a patent on Metformin (it's cheap), in the US you need a prescription to get it, and the medical system doesn't consider aging to be a disease.
I have bought $400 worth of Trump No contracts on PredictIt, which will pay off if Trump loses. The price as of this writing is 61 cents for a contract that pays $1 if Trump loses.
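For anyone checking the numbers, here is a quick back-of-the-envelope on that position (my own arithmetic; it assumes the full $400 was spent at 61 cents per contract and ignores PredictIt's fees):

```python
# Rough payoff math for the PredictIt position described above.
# Assumes all $400 bought contracts at $0.61 each and ignores fees.
stake = 400.00
price = 0.61                 # cost per "Trump No" contract
contracts = stake / price    # roughly 656 contracts
payout = contracts * 1.00    # each contract pays $1 if Trump loses
profit = payout - stake

print(round(contracts))      # ~656
print(round(payout, 2))      # ~$655.74 if Trump loses
print(round(profit, 2))      # ~$255.74 profit, about a 64% return
```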
Our estimate of which of the four possibilities is correct is conditional on our living in a universe where we observe that the predictor always guesses correctly. If we put aside cheating (which should almost always be our guess if we observe something happening that seems to defy our understanding of how the universe operates), we should have massive uncertainty concerning how randomness and/or causation operates and thus not assign too low a probability to either (2) or (3).
For next year: Raise $1,000 and convert the money to cash. Set up some device where the money burns if a code is entered, and otherwise the money gets donated to the most effective charity. Have a livestream that shows the cash and will show the fire if the code is entered.
To destroy an aircraft carrier you must first find it, and in a war the US would prioritize taking out the enemy's ability to locate our aircraft carriers. Since the carriers move, knowing where one was an hour ago might not be enough information to destroy it. In the near future aircraft carriers might be protected by laser anti-missile systems that could handle having only two seconds to destroy multiple incoming missiles.
Consider the board game Metaforms. It requires you to solve logical puzzles based on colors, shapes, and position.
https://www.amazon.com/FoxMind-Meta-Forms-Puzzle-Solving-Brain-Builder/dp/B0015MC2TO
I got a fantastic answer the first time I tried. I used some of what you wrote as a prompt. Part of GPT-3's (Dragon) response was "Now, let's see if I can get you talking about something else. Something more interesting than killing people for no reason."
You might be interested in my co-authored article "An AGI with Time-Inconsistent Preferences."
Sorry, no link, but we might do another podcast soon. As to why you should prefer this number, well, Scott Alexander said Greg has "creepy oracular powers".
According to Greg Cochran, NYC and Italy give us the best data, and the mortality rate for people who get COVID seems to be around 1.2% for an age structure similar to the US.
If general intelligence is a polygenic trait, it will be approximately normally distributed.
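Here is a minimal simulation of the reasoning behind this claim (my own illustration with made-up effect sizes): when a trait is the sum of many small, roughly independent genetic contributions, the central limit theorem pushes its distribution toward a normal curve.

```python
import numpy as np

# Toy additive model: 1,000 loci, each contributing 0 or 1 to the trait with equal
# probability. The numbers are illustrative, not estimates of real genetic architecture.
rng = np.random.default_rng(0)
n_people, n_loci = 100_000, 1_000
alleles = rng.integers(0, 2, size=(n_people, n_loci))
trait = alleles.sum(axis=1)

# By the central limit theorem the trait is approximately normal:
# mean ~ 500, standard deviation ~ sqrt(1000 * 0.25) ~ 15.8.
print(trait.mean(), trait.std())
```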
Getting up at the exact same time every day, unless I happen to wake up before my alarm goes off. It seems to have improved my sleep quality.
We are quickly learning how to treat the virus. Your grandparents' chances of survival if they get COVID-19 are likely significantly higher if they get it in three months than if they get it today. As the virus is new to humans, there are likely a lot of "low-hanging fruit" mutations for evolution to find, and the more people the virus is in, the more chance it will stumble upon a mutation that makes it better at invading the cells of young people. We don't have a good estimate of how much long-term harm it does to people it doesn't kill. While if you get the virus this year you will probably be safe from it next year, we don't know this for sure. We don't yet know if viral loads matter; it could be that the rapid initial exponential growth of the virus once it is in you means your initial exposure really isn't important.
The probability of this happening is very low. We have effective coronavirus vaccines for pigs (although not for COVID-19). For most viruses people recover from, they keep immunity, and we don't have good evidence that COVID-19 is different. While COVID-19 might do some harm to most people who recover, if the harm were on average significant we should have a lot more evidence of this. Also, the space of possible effective treatments is huge, and it seems likely that within two years (perhaps even two months) we will be able to greatly improve outcomes for the infected. Finally, keep in mind that we have just started to fight COVID-19, so we have not already tried and failed with all the obvious approaches, and this should make us relatively optimistic about coming up with effective treatments or vaccines.
Amazon Fresh doesn't deliver to my address, but with Amazon Prime, Amazon pantry, and Walmart.com I can still get a lot of food delivered to my door. I put packages in my basement (without touching them) and keep them there for at least 3 days before opening. If you don't have a basement, I suggest you put packages into a large garbage bag and leave them untouched for at least 3 days.
Yes, if you are in public, but probably not if you are in your home.
Don't push yourself too much when you exercise. Hold the railing when you walk downstairs. Get lots of sleep. Don't get intoxicated. Have antibiotic cream in your home.
Greg Cochran told me in one of our podcasts that having the flu probably provides protection against getting COVID-19 because it activates your immune system.
Consider getting a humidifier in case someone in your household gets COVID-19, because high humidity might reduce the transmission of the virus.
Lower land prices in expensive cities. Lots of high-income workers are going to experiment with working from home. Some will find it at least as productive as working in an office building. These workers, especially if they have a family, will seriously consider leaving expensive cities.
From a friend who has a PhD in microbiology: "Oh and drink plenty of water. My lab group discovered that humans and cows partially rid their body of viruses (at least adenovirus) through their urine."
What if we update on the age of the universe? Imagine that the normal course of events after a high-tech civilization arises is for it to grab all the free energy it can as fast as it can; then universes where life forms easily would not contain civilizations at our level of development at the current age of the universe.
I have been taking NAC (n-acetylcysteine) as a supplement for a while. You can (still) buy it on Amazon. From an Elsevier press release: "The authors draw attention to several randomized clinical studies in humans that have found that over the counter supplements such as n-acetylcysteine (NAC), which is used to treat acetaminophen poisoning and is also used as a mucus thinner to help reduce bronchitis exacerbations, and elderberry extracts, have evidence for shortening the duration of influenza by about two to four days and reducing the severity of the infection". Anecdotally, I stopped taking NAC for a few months and happened to catch a cold. The phlegm took longer to go away than normal, and I happened to read that NAC, which I still had, helped with phlegm, so I started taking it again and my phlegm problem quickly went away, faster than it had been going away before.
If the hospitals get overwhelmed and a family member in my home gets critically ill, what should I do to help them? Are there good YouTube videos that will teach me the basics of caring for someone with whatever lung problems the virus can cause absent my having medical equipment?
My economics department is hiring a macroeconomist this year. A huge number of applicants are submitting teaching and diversity statements in which they describe how, if hired, they will promote diversity in their teaching.
As the left has taken over most colleges, I think the only thing that could stop them would be if colleges faced tremendous economic pressure because, say, online education or drastic cuts in government funds threatened the financial position of the colleges and they were forced to become more customer oriented, more oriented toward producing scientific gains or enhancing the future income of their students. Right now, elite colleges especially are in a very comfortable financial position and so face no pressure to take actions their leaders would consider distasteful, which would include becoming more open to non-leftist views. I haven't written on this.
I agree with you on x-risks. I think one of our best paths to avoiding them would be to use genetic engineering to create very smart and moral people, but most of academia hates the possibility that genes could have anything to do with intelligence or morality.
I was initially denied tenure but appealed claiming that two members of my department voted against me for political reasons. My college's five person Grievance Committee unanimously ruled in my favor and I came up for tenure again and that time was granted it. I wrote about it here: https://www.forbes.com/forbes/2004/0607/054.html#d70ce6c6e9f1
Yes, in many fields you could hide your politically incorrect beliefs and not be harmed by them so long as you can include a statement in your tenure file of how you will work to increase diversity as defined by leftists.
I think it is getting worse in that people who have openly politically incorrect beliefs are now being considered racist. I don't see the trend reversing unless the economics of higher education change.
I was very, very wrong.
Most academics don't take politically incorrect positions. If you don't have tenure, doing so would be very dangerous. If you do, it could make it much harder to move to a higher-ranked school, but it is very difficult to fire tenured professors for speech. One way to move up in academia is to take administrative positions such as dean, provost, or college president. Taking politically incorrect positions likely completely forecloses this path.
Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics results in some (very tiny) chance that you will be revived in an evil world that will torture you, and this, absent many worlds, causes you to not sign up for cryonics. There is an argument that in many worlds there will be versions of you that are going to be tortured anyway, so your goal should be to reduce the percentage of these versions that get tortured. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, and signing up will thus likely lower the percentage of versions of you across the multiverse who are tortured. Signing up for cryonics in this world reduces the relative importance of the versions of you trapped in worlds where the Nazis won and are torturing you.
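To make the percentage argument concrete, here is a toy calculation (all branch counts are invented purely to show the arithmetic): if some versions of you get tortured regardless, and cryonics mostly adds revived-and-not-tortured versions, the tortured share of all your versions goes down even though the absolute count barely changes.

```python
# Toy illustration of the "fraction of your versions tortured" argument above.
# Every number is made up solely to show how the arithmetic works.
tortured_anyway = 1_000        # versions tortured regardless of cryonics (e.g., worlds where the Nazis won)
never_revived = 1_000_000      # versions who simply die and are never revived

frac_without = tortured_anyway / (tortured_anyway + never_revived)

# With cryonics: revival adds many good outcomes and, by assumption, very few bad ones.
revived_fine = 100_000
revived_tortured = 10
frac_with = (tortured_anyway + revived_tortured) / (
    tortured_anyway + never_revived + revived_fine + revived_tortured
)

print(f"{frac_without:.6f}")   # ~0.000999 tortured share without cryonics
print(f"{frac_with:.6f}")      # ~0.000917 tortured share with cryonics: a smaller fraction
```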
While you might be right, it's also possible that von Neumann doesn't have a contemporary peer. Apparently top scientists who knew von Neumann considered von Neumann to be smarter than the other scientists they knew.
Yes, I am referring to "IQ" not g because most people do not know what g is. (For other readers, IQ is the measurement, g is the real thing.) I have looked into IQ research a lot and spoken to a few experts. While genetics likely doesn't play much of a role in the Flynn effect, it plays a huge role in g and IQ. This is established beyond any reasonable doubt. IQ is a very politically sensitive topic, and people are not always honest about it. Indeed, some experts admit to other experts that they lie about IQ when discussing it in public (source: my friend and podcasting partner Greg Cochran; the podcast is Future Strategist). We don't know if the Flynn effect is real; it might just come from measurement errors arising from people becoming more familiar with IQ-like tests, although it could also reflect real gains in g that are being captured by higher IQ scores. There is no good evidence that education raises g. The literature on IQ is so massive, and so poisoned by political correctness (and, some would claim, racism), that it is not possible to resolve the issues you raise by citing literature. If you ask IQ experts why they disagree with other IQ experts, they will say that the other experts are idiots/liars/racists/cowards. I interviewed a lot of IQ experts when writing my book Singularity Rising.
Most likely von Neumann had a combination of (1) lots of additive genes that increased intelligence, (2) few additive genes that reduced intelligence, (3) low mutational load, (4) a rare combination of non-additive genes that increased intelligence (meaning genes with non-linear effects), and (5) lucky brain development. A clone would have the advantages of (1)-(4). While it might in theory be possible to raise IQ by creating the proper learning environment, we have no evidence of having done this, so it seems unlikely that this was the cause of von Neumann's high intelligence.