A brief history of ethically concerned scientists
post by Kaj_Sotala · 2013-02-09T05:50:00.045Z · LW · GW · Legacy · 143 comments
Contents: Pre-industrial inventors · Chemical warfare · Nuclear weapons · Recombinant DNA · Informatics · Conclusion · Sources used
For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.
-- Norbert Wiener (1956), Moral Reflections of a Mathematician.
Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. For someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous. Throughout history, many scientists and inventors have recognized this, and taken different kinds of action to help ensure that their work will have beneficial consequences. Here are some of them.
This post is not arguing that any specific approach for taking responsibility for one's actions is the correct one. Some researchers hid their work, others refocused on other fields, still others began active campaigns to change the way their work was being used. It is up to the reader to decide which of these approaches were successful and worth emulating, and which ones were not.
Pre-industrial inventors
… I do not publish nor divulge [methods of building submarines] by reason of the evil nature of men who would use them as means of destruction at the bottom of the sea, by sending ships to the bottom, and sinking them together with the men in them.
-- Leonardo da Vinci, Notebooks
People did not always think that the benefits of freely disseminating knowledge outweighed the harms. O.T. Benfey, writing in a 1956 issue of the Bulletin of the Atomic Scientists, cites F.S. Taylor’s book on early alchemists:
Alchemy was certainly intended to be useful .... But [the alchemist] never proposes the public use of such things, the disclosing of his knowledge for the benefit of man. …. Any disclosure of the alchemical secret was felt to be profoundly wrong, and likely to bring immediate punishment from on high. The reason generally given for such secrecy was the probable abuse by wicked men of the power that the alchemical art would give …. The alchemists, indeed, felt a strong moral responsibility that is not always acknowledged by the scientists of today.
With the Renaissance, science began to be viewed as public property, but many scientists remained cautious about the way in which their work might be used. Although he held the office of military engineer, Leonardo da Vinci (1452-1519) drew a distinction between offensive and defensive warfare, and emphasized the role of good defenses in protecting people’s liberty from tyrants. He described war as ‘bestialissima pazzia’ (most bestial madness), and wrote that ‘it is an infinitely atrocious thing to take away the life of a man’. One of the clearest examples of his reluctance to unleash dangerous inventions was his refusal to publish the details of his plans for submarines.
Later Renaissance thinkers continued to be concerned with the potential uses of their discoveries. John Napier (1550-1617), the inventor of logarithms, also experimented with a new form of artillery. Upon seeing its destructive power, he decided to keep its details a secret, and even spoke from his deathbed against the creation of new kinds of weapons.
But concealing a single discovery pales in comparison to the example of Robert Boyle (1627-1691). A pioneer of physics and chemistry, perhaps most famous for formulating and publishing Boyle’s law, he sought to make humanity better off, taking an interest in things such as improved agricultural methods and better medicine. In the course of his studies, he also acquired knowledge and made inventions related to a variety of potentially harmful subjects, including poisons, invisible ink, counterfeit money, explosives, and kinetic weaponry. These ‘my love of Mankind has oblig’d me to conceal, even from my nearest Friends’.
Chemical warfare
By the early twentieth century, people had begun looking at science in an increasingly optimistic light: it was believed that science would not only continue to improve everyone’s prosperity, but would also make war outright impossible. Yet as science grew more sophisticated, it also became possible to cause ever more harm with ever fewer resources. One of the early indications of science’s capacity for harm came from advances in chemical warfare, and World War I saw the deployment of chlorine, phosgene, and mustard gas as weapons. It should not be surprising, then, that some scientists in related fields grew concerned. But unlike earlier inventors, at least three of them did far more than just refuse to publish their work.
Clara Immerwahr (1870-1915) was a German chemist and the first woman to obtain a PhD from the University of Breslau. She was strongly opposed to the use of chemical weapons. Married to Fritz Haber, ‘the father of chemical warfare’, she tried many times, without success, to convince her husband to abandon his work. Immerwahr was generally depressed and miserable over the fact that society considered a married woman’s place to be at home, denying her the opportunity to do science. In the end, after her efforts to dissuade her husband from working on chemical warfare had failed and Fritz had personally overseen the first major use of chlorine, she committed suicide by shooting herself in the heart.
Poison gas also concerned scientists in other disciplines. Lewis Fry Richardson (1881-1953) was a mathematician and meteorologist. During World War II, the military became interested in his work on turbulence and gas mixing, and attempted to recruit him to help model the best ways of using poison gas. Realizing what his work was being used for, Richardson abandoned meteorology entirely and destroyed his unpublished research. Instead, he turned to investigating the causes of war, attempting to find ways to reduce the risk of armed conflict. He devoted the rest of his life to this topic, and is today considered one of the founders of the scientific analysis of conflict.
Arthur Galston (1920-2008), a botanist, was also concerned about the military use of his discoveries. Building upon his work, the US military developed Agent Orange, a herbicidal defoliant which was deployed in the Vietnam War. Upon discovering what his work had been used for, he began to campaign against its use, and together with a number of others he finally convinced President Nixon to order an end to its spraying in 1970. Reflecting upon the matter, Galston wrote:
I used to think that one could avoid involvement in the antisocial consequences of science simply by not working on any project that might be turned to evil or destructive ends. I have learned that things are not all that simple, and that almost any scientific finding can be perverted or twisted under appropriate societal pressures. In my view, the only recourse for a scientist concerned about the social consequences of his work is to remain involved with it to the end. His responsibility to society does not cease with publication of a definitive scientific paper. Rather, if his discovery is translated into some impact on the world outside the laboratory, he will, in most instances, want to follow through to see that it is used for constructive rather than anti-human purposes.
After retiring in 1990, he founded the Interdisciplinary Center for Bioethics at Yale, where he also taught bioethics to undergraduates.
Nuclear weapons
While chemical weapons are capable of inflicting serious injuries and birth defects on large numbers of people, they have never been regarded as being as dangerous as nuclear weapons. As physicists became capable of creating weapons of unparalleled destructive power, they also grew ever more concerned about the consequences of their work.
Leó Szilárd (1898-1964) was one of the first people to envision nuclear weapons, and he filed a patent on the nuclear chain reaction in 1934. Two years later, worried that Nazi scientists would find his patents and use them to create weapons, he asked the British Patent Office to withdraw them and secretly reassign them to the Royal Navy. His fear of Nazi Germany developing nuclear weapons also made him instrumental in getting the USA to initiate the Manhattan Project: together with two other scientists, he wrote the Einstein-Szilárd letter advising President Roosevelt of the need to develop the same technology. But in 1945, Szilárd learned that the atomic bomb was about to be used on Japan, even though it was by then certain that neither Germany nor Japan had one. He did his best to prevent the bombs from being used, starting a petition against their use, but with little success.
After the war, he no longer wanted to contribute to the creation of weapons and changed fields to molecular biology. In 1962, he founded the Council for a Livable World, which aimed to warn people about the dangers of nuclear war and to promote a policy of arms control. The Council continues its work even today.
Another physicist who worked on the atomic bomb out of fear that Nazi Germany would develop one was Joseph Rotblat (1908-2005), who felt that the Allies also having an atomic bomb would deter the Axis from using theirs. But he gradually came to realize that Nazi Germany would likely never develop the atomic bomb, undermining his original reason for working on it. He also came to believe that the bomb remained under active development for reasons he considered unethical: in conversation, General Leslie Groves mentioned that the real purpose of the bomb was to subdue the USSR. Rotblat was shocked to hear this, especially given that the Soviet Union was at the time an ally in the war effort. When it became apparent in 1944 that Germany would not develop the atomic bomb, Rotblat asked for permission to leave the project, and was granted it.
Afterwards, Rotblat regretted his role in developing nuclear weapons. He believed that the logic of nuclear deterrence was flawed: he thought that if Hitler had possessed an atomic bomb, Hitler’s last order would have been to use it against London regardless of the consequences. Rotblat decided to do whatever he could to prevent the future use and deployment of nuclear weapons, and proposed a worldwide moratorium on such research until humanity was wise enough to handle the technology without risk. He repurposed his career into something he considered more useful for humanity, studying and teaching the application of nuclear physics to medicine, and became a professor at the Medical College of St Bartholomew’s Hospital in London.
Rotblat worked together with Bertrand Russell to limit the spread of nuclear weapons, and the two collaborated with a number of other scientists to issue the Russell-Einstein Manifesto in 1955, calling on the governments of the world to take action to prevent nuclear weapons from doing more damage. The manifesto led to the establishment of the Pugwash Conferences, in which nuclear scientists from both the West and the East met each other. By facilitating dialogue between the two sides of the Cold War, these conferences helped lead to several arms control agreements, such as the Partial Test Ban Treaty of 1963 and the Non-Proliferation Treaty of 1968. In 1995, Rotblat and the Pugwash Conferences were awarded the Nobel Peace Prize “for their efforts to diminish the part played by nuclear arms in international politics and, in the longer run, to eliminate such arms”.
The development of nuclear weapons also affected Norbert Wiener (1894-1964), professor of mathematics at the Massachusetts Institute of Technology and the originator of the field of cybernetics. After the Hiroshima bombing, a researcher working for a major aircraft corporation requested a copy of an earlier paper of Wiener’s. Wiener refused to provide it, and sent Atlantic Monthly a copy of his response to the researcher, in which he declared his refusal to share his research with anyone who would use it for military purposes.
In the past, the community of scholars has made it a custom to furnish scientific information to any person seriously seeking it. However, we must face these facts: The policy of the government itself during and after the war, say in the bombing of Hiroshima and Nagasaki, has made it clear that to provide scientific information is not a necessarily innocent act, and may entail the gravest consequences. One therefore cannot escape reconsidering the established custom of the scientist to give information to every person who may inquire of him. The interchange of ideas, one of the great traditions of science, must of course receive certain limitations when the scientist becomes an arbiter of life and death. [...]
The experience of the scientists who have worked on the atomic bomb has indicated that in any investigation of this kind the scientist ends by putting unlimited powers in the hands of the people whom he is least inclined to trust with their use. It is perfectly clear also that to disseminate information about a weapon in the present state of our civilization is to make it practically certain that that weapon will be used. [...]
If therefore I do not desire to participate in the bombing or poisoning of defenseless peoples - and I most certainly do not - I must take a serious responsibility as to those to whom I disclose my scientific ideas. Since it is obvious that with sufficient effort you can obtain my material, even though it is out of print, I can only protest pro forma in refusing to give you any information concerning my past work. However, I rejoice at the fact that my material is not readily available, inasmuch as it gives me the opportunity to raise this serious moral issue. I do not expect to publish any future work of mine which may do damage in the hands of irresponsible militarists.
I am taking the liberty of calling this letter to the attention of other people in scientific work. I believe it is only proper that they should know of it in order to make their own independent decisions, if similar situations should confront them.
Recombinant DNA
For a large part of history, scientists’ greatest ethical concerns came from the direct military applications of their inventions. While any invention could lead to unintended societal or environmental consequences, researchers working on peaceful technologies mostly did not need to worry about their work being dangerous in itself. But once biological and medical research gained the capability to modify genes and bacteria, it opened up the possibility of unintentionally creating dangerous infectious diseases. In theory, these could be even more dangerous than nuclear weapons - an a-bomb dropped on a city might destroy most of that city, but a single bacterium could give rise to an epidemic infecting people all around the world.
Recombinant DNA techniques involve taking DNA from one source and introducing it into another kind of organism, causing the new genes to express themselves in the target organism. One of the pioneers of this technique was Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome for a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1920-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research as well as a temporary moratorium. This eventually led to two conferences at Asilomar, with 140 experts participating in the latter conference in 1975 to decide upon guidelines for recombinant DNA research.
Berg and Pollack were far from the only scientists to call attention to the safety concerns of recombinant DNA. Several others contributed as well, calling for more safety precautions and voicing concern about a technology that could cause harm if misused.
Among them, the molecular biologist Maxine Singer (1931-) chaired the 1973 Gordon Conference on Nucleic Acids, at which some of the dangers of the technique were discussed. After the conference, she and several other similarly concerned scientists wrote a letter to the President of the National Academy of Sciences and the President of the Institutes of Health. The letter suggested that a committee be established to investigate the risks of the new recombinant DNA technology and to propose specific actions or guidelines if necessary. She also helped organize the 1975 Asilomar Conference.
Informatics
But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.
-- Bill Joy, Why the Future Doesn’t Need Us.
Finally, we come to the topic of information technology and artificial intelligence. As AI systems grow increasingly autonomous, they might become the ultimate example of a technology that seems initially innocuous but ends up capable of doing great damage. Especially if they were to become capable of rapid self-improvement, they could lead to humanity going extinct.
In addition to refusing to aid military research, Norbert Wiener was also concerned about the effects of automation. In 1949, General Electric wanted him to advise its managers on automation matters and to teach automation methods to its engineers. Wiener refused these requests, believing that they would further a development in which human workers would be replaced by machines and left unemployed. He thus expanded his boycott of the military into a boycott of corporations that he thought acted unethically.
Wiener was also concerned about the risks of autonomous AI. In 1960, Science published his paper "Some Moral and Technical Consequences of Automation", in which he spoke at length about the dangers of machine intelligence. He warned that machines might act far too fast for humans to correct their mistakes, and that like genies in stories, they could fulfill the letter of our requests without caring about their spirit. He also discussed such worries elsewhere:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.
Such worries would continue to bother other computer scientists as well, many decades after Wiener’s death. Bill Joy (1954-) is known for having played a major role in the development of BSD Unix, having authored the vi text editor, and being the co-founder of Sun Microsystems. He became concerned about the effects of AI in 1998, when he met Ray Kurzweil at a conference where they were both speakers. Kurzweil gave Joy a preprint of his then-upcoming book, The Age of Spiritual Machines, and Joy found himself concerned over its discussion about the risks of AI. Reading Hans Moravec’s book Robot: Mere Machine to Transcendent Mind exacerbated Joy’s worries, as did several other books which he found around the same time. He began to wonder whether all of his work in the field of information technology and computing had been preparing the way for a world where machines would replace humans.
In 2000, Joy wrote a widely-read article for Wired titled Why the Future Doesn’t Need Us, discussing the dangers of AI as well as of genetic engineering and nanotechnology. In the article, he called for limiting the development of technologies which he felt were too dangerous. Since then, he has continued to be active in promoting responsible technology research. In 2005, the New York Times published an op-ed co-authored by Joy and Ray Kurzweil, arguing that the decision to publish the genome of the 1918 influenza virus on the Internet had been a mistake.
Joy also attempted to write a book on the topic, but became convinced that he could achieve more by working on science and technology investment. In 2005, he joined the venture capital firm Kleiner Perkins Caufield & Byers as a partner, where he has focused on investments in green technology.
Conclusion
Technology's potential for destruction will only continue to grow, but many of the social norms of science were established under the assumption that scientists don’t need to worry much about how the results of their work are used. Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work.
Sources used
This article was written based on research done by Vincent Fagot. The sources listed below are in addition to any that are already linked from the text.
Leonardo da Vinci:
- “The Notebooks of Leonardo da Vinci”, vol. 1, by Edward MacCurdy (1905 edition)
- “The scientist’s conscience: historical considerations”, Bulletin of the Atomic Scientists, May 1956, p. 177
John Napier:
- “The scientist’s conscience: historical considerations”, Bulletin of the Atomic Scientists, May 1956, p. 177
- Rosemary Chalk: Drawing the Line: An Examination of Conscientious Objection in Science
Robert Boyle:
- Secrets and Knowledge in Medicine and Science, 1500-1800, by Elaine Leong and Alisha Rankin, pp. 87-104
- Dictionary of National Biography, volume 06 (1886 edition), “Robert Boyle” entry, around pp. 118-123
- Robert Boyle Reconsidered, by Michael Hunter
Clara Immerwahr:
- Rhodes: The Making of the Atomic Bomb
- Jan Apotheker, Livia Simon Sarkadi and Nicole J. Moreau: European Women in Chemistry
- John Cornwell: Hitler's Scientists: Science, War, and the Devil's Pact
Lewis Fry Richardson:
- T. W. Körner: The Pleasures of Counting
- “The scientist’s conscience: historical considerations”, Bulletin of the Atomic Scientists, May 1956, p. 177
Arthur Galston:
- Galston, A.: “An Accidental Plant Biologist”, Plant Physiology, March 2002, vol. 128, no. 3
- “Science and Social Responsibility: A Case History”, Annals of the New York Academy of Sciences, vol. 196, article 4
Leó Szilárd:
Joseph Rotblat:
- Joseph Rotblat: “Leaving the Bomb Project”, Bulletin of the Atomic Scientists, August 1985
- Keeper of the Nuclear Conscience: The Life and Work of Joseph Rotblat
- 1999 voice-recorded interview with Joseph Rotblat
- Deriving an Ethical Code for Scientists: An Interview With Joseph Rotblat
Norbert Wiener:
- Postmodern War: The New Politics of Conflict, by Chris Hables Gray (available online)
- “The scientist’s conscience: historical considerations”, Bulletin of the Atomic Scientists, May 1956, p. 178
- Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics, by Flo Conway
- “A Scientist Rebels”, Bulletin of the Atomic Scientists, January 1947, p. 31
- “Moral Reflections of a Mathematician”, Bulletin of the Atomic Scientists, February 1956, p. 53
- “A Rebellious Scientist After Two Years”, Bulletin of the Atomic Scientists, November 1948, p. 338
- Rosemary Chalk: Drawing the Line: An Examination of Conscientious Objection in Science
- “Some Moral and Technical Consequences of Automation”, Science, 6 May 1960
Paul Berg, Maxine Singer, Robert Pollack:
- P. Berg and M. F. Singer: “The recombinant DNA controversy: twenty years later”, Proc Natl Acad Sci USA, 26 September 1995
- “Potential Biohazards of Recombinant DNA Molecules”, by Paul Berg, 1974
- “Guidelines for DNA Hybrid Molecules”, by Maxine Singer, 1973
- Biomedical Politics, by Kathi E. Hanna, 1991 (chapter: “Asilomar and Recombinant DNA”, by Donald S. Fredrickson)
- Asilomar Conference on Laboratory Precautions When Conducting Recombinant DNA Research – Case Summary
- Report - Assembly of Life Sciences, National Research Council
- P. Berg: “Potential Biohazards of Recombinant DNA Molecules”
- Watson and DNA: Making a Scientific Revolution
143 comments
Comments sorted by top scores.
comment by CronoDAS · 2013-02-09T07:22:13.922Z · LW(p) · GW(p)
Do (incremental) advances in military technology actually change the number of people who die in wars? They might change which people die, or how rapidly, but it seems to me that groups of people who are determined to fight each other are going to do it regardless of what the "best" weapons currently available happen to be. The Mongols wreaked havoc on a scale surpassing World War I with only 13th century technology, and the Rwandan genocide was mostly carried out with machetes. World War I brought about a horror of poison gas, but bullets and explosions don't make people any less dead than poison gas does.
(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)
Replies from: CCC, ewbrownv, V_V, DanArmak, Nornagest, ygert, ikrase
↑ comment by CCC · 2013-02-09T19:28:51.876Z · LW(p) · GW(p)
That is very much the right question to ask. How can we best find the answer?
Perhaps a timeline of major wars, together with the casualty figures (both as raw numbers, and as a percentage of estimated combatants) would provide that answer.
Hmmm... of the top ten wars by death toll, according to a Wikipedia list self-described as incomplete, the deaths per war ranged from 8-12 million (no. 10) to 60-78 million (no. 1, WWII). This is about an eightfold difference. The second war on the list is the 13th-century Mongol conquests, and the earliest on the list is the Warring States era, in China, around 400 B.C. (10 million, estimated, 9th on the list).
Glancing over the data, I notice that most of the wars in that list are either world-spanning, or took place in China. This, I imagine, is most likely because China has a large population; thus, there are more people to get involved in, and killed in, a war. A list rearranged by percentage of involved soldiers killed might show a different trend.
I also notice that there is a very wide range of dates; but the century with the most entries in that top-ten list is the twentieth century. That may be influenced by the fact that there were more people around in the 20th century, and also by the scale of some of the conflicts (WWI and WWII, for example).
I'm not sure whether the data supports the hypothesis or not, though. Given the wide range of dates, I'm inclined to think that you may be right; that advances in war change the manner of death more than the number of deaths.
↑ comment by ewbrownv · 2013-02-12T22:59:33.177Z · LW(p) · GW(p)
Good insight.
No, even a brief examination of history makes it clear that the lethality of warfare is almost completely determined by the culture and ideology of the people involved. In some wars the victors try to avoid civilian casualties, while in others they kill all the adult males or even wipe out entire populations. Those fatalities dwarf anything produced in the actual fighting, and they can and have been inflicted with bronze age technology. So anyone interested in making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology.
As for the casualty rate of soldiers, that tends to jump up whenever a new type of weapon is introduced and then fall again as tactics change to deal with it. In the long run the dominant factor is again a matter of ideology - an army that tries to minimize casualties can generally do so, while one that sees soldiers as expendable will get them killed in huge numbers regardless of technology.
(BTW, WWI gases are nothing unusual in the crippling injury department - cannons, guns, explosives and edged weapons all have a tendency to litter the battlefield with crippled victims as well. What changed in the 20th century was that better medical care meant a larger fraction of crippled soldiers survived their injuries to return to civilian life.)
Replies from: Troshen
↑ comment by Troshen · 2013-02-25T22:39:27.995Z · LW(p) · GW(p)
"So anyone interested making war less lethal would be well advised to focus on spreading tolerant ideologies rather than worrying about weapon technology."
This is actually one of the major purposes that Christians have had in doing missionary work - to spread tolerance and reduce violence. I assume it's happened in other religions too. For example, the rules of chivalry in the middle ages were an attempt to moderate the violence and abuses of the warriors.
↑ comment by V_V · 2013-02-09T15:29:03.277Z · LW(p) · GW(p)
(Although the World War 1 era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)
Bullets and explosions don't necessarily kill.
↑ comment by DanArmak · 2013-02-09T12:26:12.140Z · LW(p) · GW(p)
Some actual or hypothetical advances in military technology allow very widespread, imprecise destruction. Such destruction could kill big segments of the enemy state's civilian population, or of a population in which a guerrilla army is embedded, as a side effect of killing soldiers.
For instance sufficiently powerful or numerous bombs can destroy large cities. Pathogens can kill or sicken an entire population (with the attacker distributing a vaccine or cure among their own population only). Damage to infrastructure can kill those who depend on it.
Replies from: ikrase
↑ comment by ikrase · 2013-02-12T15:24:32.483Z · LW(p) · GW(p)
Notably, the two World Wars introduced the mass use of mechanized units and heavy weapons leading to a huge amount of infrastructure damage.
Replies from: DanArmak
↑ comment by DanArmak · 2013-02-12T19:39:22.587Z · LW(p) · GW(p)
On the other hand, a century or two previously little infrastructure existed outside cities. Railways, electricity lines and power plants, car-quality roads, oil and gas pipelines, even most roads or trans-city water and sewage systems are modern inventions.
↑ comment by Nornagest · 2013-02-09T08:14:07.756Z · LW(p) · GW(p)
My armchair impression is that advances in military technology can lead to higher casualty rates when tactics haven't caught up, but that once they do the death toll regresses to the mean pretty quick. Two examples: Minié balls greatly increased the accuracy and effective range of quick-loading small arms (rifling had been around for a while, but earlier muzzle-loading rifles took much longer to load), essentially rendering Napoleonic line tactics obsolete, but it took decades and two major wars (the Crimean and the American Civil War) before the lesson fully sank in. A century later, large-scale strategic bombing of civilian targets contributed to much of WWII's death toll, without bringing about the rapid capitulations it had been intended to produce.
Replies from: CronoDAS
↑ comment by CronoDAS · 2013-02-09T08:42:25.828Z · LW(p) · GW(p)
Perhaps higher casualty rates lead to wars ending sooner? After all, wars do not end when they are won, but when those who want to fight to the death find their wish has been granted.
↑ comment by ygert · 2013-02-12T17:17:37.417Z · LW(p) · GW(p)
Well, one could argue that the biggest advance in military technology (nuclear weapons) vastly decreased the number of deaths in wars where it was involved. That is, far fewer people died from the Cold War than from World War II. So to that extent, the military technology actually drove the number of deaths down.
comment by boni_bo · 2013-02-08T09:28:55.078Z · LW(p) · GW(p)
In 1948 Norbert Wiener, in the book Cybernetics: Or the Control and Communication in the Animal and the Machine, said: "Prefrontal lobotomy... has recently been having a certain vogue, probably not unconnected with the fact that it makes the custodial care of many patients easier. Let me remark in passing that killing them makes their custodial care still easier."
Replies from: None
↑ comment by [deleted] · 2013-02-08T12:47:24.800Z · LW(p) · GW(p)
Wiener had a well-calibrated moral compass, but still felt the need to address the religious aspects of machine learning.
comment by [deleted] · 2013-02-08T12:43:52.726Z · LW(p) · GW(p)
A good article, but one thing that sticks out to me is the overall ineffectiveness of these scientists at preventing the actual use of their technology. Only the recombinant DNA experiment was stopped before actually being carried out.
Replies from: PaulS, AntonioAdan
↑ comment by PaulS · 2013-02-09T08:00:36.460Z · LW(p) · GW(p)
This may be partly because technologies that were used are more conspicuous. We would know if Napier designed a better cannon, but we don't know how much he delayed the development of artillery by concealing his results.
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2013-02-09T08:09:56.784Z · LW(p) · GW(p)
Right, there's a survivorship bias. You're not going to hear about scientists who successfully prevented anyone from learning about their terrible discoveries (because to be really successful they'd also need to prevent anyone from learning that they'd prevented anyone from learning about their terrible discoveries).
↑ comment by AntonioAdan · 2013-02-14T21:48:01.775Z · LW(p) · GW(p)
Once they let the cat out of the bag this is true. Da Vinci understood how to keep a secret.
comment by Manfred · 2013-02-08T19:14:19.110Z · LW(p) · GW(p)
My nitpick is the vague and spooky description of Paul Berg's research. The surrounding tone is great, but this little bit ends up sounding bad.
Current:
Paul Berg (1926-), who carried out part of an experiment (like what?) which would, if completed, have created a potentially carcinogenic (vague) strain of a common gut bacteria (just say E. coli) which could have spread to human beings (Say what relevant people at the time thought, "could have" implies false danger). Due to the concerns of other scientists, he put the final part of the experiment on hold, and called for more attention to the risks of such research as well as a temporary moratorium.
Preferred:
Paul Berg (1926-), who in 1972 had already carried out the preparations for creating a strain of E. coli that contained the genome for a human-infectious virus (SV40) with tentative links to cancer. Robert Pollack (1920-) heard news of this experiment and helped convince Berg to halt it - both were concerned about the danger that this new strain would spread to humans in the lab and become a pathogen. Berg then became a major voice calling for more attention to the risks of such research as well as a temporary moratorium.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-02-09T10:48:38.265Z · LW(p) · GW(p)
Thanks, that's indeed better. I've replaced it with your version. (The original was vague probably because there were several conflicting accounts of what exactly happened, with e.g. different sources putting the time of the experiments to 1971, 1973 and 1974, and then I got kinda frustrated with the thing and did the write-up pretty vaguely.)
Replies from: Manfred
comment by fela · 2013-02-09T18:47:15.843Z · LW(p) · GW(p)
Jared Diamond, in Guns Germs and Steel, argues that when the time is ripe scientific discoveries are made quite regardless of who makes them, give or take a few decades. Most discoveries are incremental, and many are made by multiple people simultaneously. So wouldn't a discovery that isn't published be just made elsewhere in a few years time, possibly by someone without many ethical concerns?
Replies from: adam_strandberg, Eliezer_Yudkowsky, lukeprog, ricketson
↑ comment by adam_strandberg · 2013-02-10T18:15:10.139Z · LW(p) · GW(p)
Even a few years of delay can make a big difference if you are in the middle of a major war. If Galston hadn't published his results and they weren't found until a decade or two later, the US probably wouldn't have used Agent Orange in Vietnam. Similarly with chlorine gas in WWI, atomic bombs in WWII, etc. Granted, delaying the invention doesn't necessarily make the overall outcome better. If the atomic bomb wasn't invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.
Replies from: army1987, Desrtopa
↑ comment by A1987dM (army1987) · 2013-02-11T17:16:19.528Z · LW(p) · GW(p)
If the atomic bomb wasn't invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.
Huh. I had never thought about that from that angle.
↑ comment by Desrtopa · 2013-02-10T20:05:50.835Z · LW(p) · GW(p)
For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.
When wars are fought until a decisive victory, a huge technological edge may serve to decrease the death toll, as the side at a disadvantage will be more easily persuaded to give up.
Replies from: None, gwern, CCC
↑ comment by [deleted] · 2013-02-11T02:29:40.034Z · LW(p) · GW(p)
For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.
This is commonly taught in US schools, but you should be aware that the claim has some serious flaws: http://en.wikipedia.org/wiki/Debate_over_the_atomic_bombings_of_Hiroshima_and_Nagasaki#Militarily_unnecessary
Replies from: Desrtopa
↑ comment by Desrtopa · 2013-02-11T05:18:18.222Z · LW(p) · GW(p)
Gwern already linked to the same page previously. I've updated on the information, however, in my time at school I also did a research project on the atom bombing, and the sources I read for the project (which are not online, at least as far as I know,) cited Japanese military officials who were of the opinion that their country would have continued to resist, even to the point of a land invasion, and that the bombings were instrumental in changing that.
There are certainly good reasons to suspect that Japan might have surrendered soon under the same terms even without the dropping of the bombs, but it's also not as if there is a dearth of evidence suggesting that the bombings were a significant factor.
↑ comment by gwern · 2013-02-10T23:15:31.063Z · LW(p) · GW(p)
For that matter, if we didn't use the atom bombs in Hiroshima and Nagasaki, then we would have gone ahead with the land invasion, resulting in far more fatalities.
You know this interpretation is massively debated and criticized due to the Russian declaration of war and internal Japanese deliberations: http://en.wikipedia.org/wiki/Debate_over_the_atomic_bombings_of_Hiroshima_and_Nagasaki#Militarily_unnecessary http://en.wikipedia.org/wiki/Atomic_bombings_of_Hiroshima_and_Nagasaki#Surrender_of_Japan_and_subsequent_occupation
Replies from: Desrtopa
↑ comment by Desrtopa · 2013-02-10T23:28:29.652Z · LW(p) · GW(p)
It's true that Japan was already willing to surrender, and perhaps this should have been a sufficient goal for the U.S. forces, but there was still a great degree of resistance to the prospect of unconditional surrender. For better or for worse, the U.S. was unsatisfied with the terms of surrender the Japanese were willing to accept prior to the Hiroshima and Nagasaki attacks, and were planning to pursue further measures until they achieved unconditional surrender.
Even if America did not resort to land invasion, months more of firebombing would most likely have resulted in a greater number of fatalities than the use of the atom bombs.
Replies from: gwern
↑ comment by gwern · 2013-02-11T00:11:15.904Z · LW(p) · GW(p)
The terms are irrelevant, because the US did not get an unconditional surrender in your all-embracing sense. It got a capitulation with the understanding that the Emperor was not threatened (which was indeed subsequently the case), which makes sense once you understand that the 'unconditional surrender' in the Potsdam Declaration was only about the military forces:
"We call upon the government of Japan to proclaim now the unconditional surrender of all Japanese armed forces, and to provide proper and adequate assurances of their good faith in such action. The alternative for Japan is prompt and utter destruction."
The question is why the Japanese government abandoned its previous insistence on a general admission of defeat with 4 conditions and settled for just 1 condition which was acceptable to the US since it was not a military condition. And the reason for the dropping seems to have in large part been the sudden shock of negotiations with Russia failing and it dropping neutrality and starting its invasion. Even despite its almost immediate surrender to the US, Japan still lost Sakhalin.
(I'd note that we might expect claims about the necessity of the bombings to be overblown for at least 2 reasons: first and most obviously, it is important so as to justify the murder of hundreds of thousands of civilians in those bombings and other ongoing campaigns despite US government awareness of Japan's ongoing surrender overtures and that Russia would switch its attention to the Japanese front soon with what were probably at the time predictable consequences, and secondly, it is a useful claim in minimizing credit for the Russian contribution to WWII, a phenomenon already acknowledged about most US treatments of the European theater's eastern front.)
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-10T16:44:13.550Z · LW(p) · GW(p)
What a good thing for all of us that Leo Szilard did not make this mistake.
↑ comment by lukeprog · 2013-02-10T04:10:19.323Z · LW(p) · GW(p)
Maybe.
I'm not an expert on the history of science, but it seems to me like:
- Lots of psychology could have been done decades or maybe a century earlier, but nobody bothered until the mid-20th century.
- If Einstein hadn't figured out General Relativity, it might have been another 15-25 years before somebody else figured it out.
- On the other hand, things like computers and Bayes nets and the structure of DNA wouldn't have taken much longer to discover if their actual discoverers hadn't been on the case for whatever reason.
↑ comment by ricketson · 2013-02-09T20:26:58.254Z · LW(p) · GW(p)
Especially in the modern environment with many thousands of scientists, there won't be much delay caused by a few scientists withholding their results. The greatest risk is that the discovery is made by someone who will keep it secret in order to increase their own power.
There is also a risk that keeping secrets will breed mistrust, even if the secret is kept without evil intent.
comment by CCC · 2013-02-08T13:21:54.125Z · LW(p) · GW(p)
Asimov once wrote a short story - "A Feeling of Power" - on the subject of misusing technology for evil ends. In the story, set in the far future, a man rediscovers basic mathematics (without a computer) - possibly the most innocent of possible advances - and, once he sees what horrors this invention will lead to, kills himself.
A lot of technology (I'd even say most technology) is two-pronged - it can be used for good or evil. Nuclear power or nuclear bombs. Creating disease or destroying disease. The products of technology are not immune to this; the Archimedean screw can irrigate a field or flood it. Dynamite can be used in excavation or as a weapon.
So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course). This is mainly because our current society rests on the public research and discoveries of countless people throughout history; I would prefer that future societies should be even better than our current society, and the best way that I see to ensure that is by giving future societies a greater base of knowledge to draw from.
Replies from: Eugine_Nier, Kawoomba
↑ comment by Eugine_Nier · 2013-02-09T00:34:26.750Z · LW(p) · GW(p)
So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course).
The idea of destroying your own research to stop progress seems to assume that no one else can do the same experiment.
Replies from: CCC
↑ comment by CCC · 2013-02-09T19:00:58.067Z · LW(p) · GW(p)
It could merely be that the scientist knows that his research in particular is being watched by men who will immediately misuse it when they can; allowing some random person to re-run the experiment is not a problem, assuming that the random person is not being watched in particular.
It could be that the experimental work is complex enough, and the expected returns unexpected enough, that the scientist has good reason to think that it will be a decade or more until the experiment is re-done - by which point one may hope that the political/social landscape may have changed enough to put less emphasis on evil uses (e.g. a major war may have ended in the interim). (Note that, in the case of one particular theory - continental drift - it was suggested under that name as far back as 1912 - and the idea of the continents moving was proposed as early as 1596 - but was still not accepted in the 1940s).
These assumptions are both a good deal weaker than the one you suggest, but I don't think they're unreasonable.
↑ comment by Kawoomba · 2013-02-08T15:22:25.612Z · LW(p) · GW(p)
So, while the ethical scientist should of course evaluate each situation on its merits and take care to ensure that safety protocols are followed (as in the recombinant DNA example in the article), and should try to encourage the beneficial uses of technology, I don't think that destroying one's own research is a good general way to accomplish this. (There are specific cases where it might be necessary, of course).
Good luck with destroying your research and getting away with it. Unless you bring your own particle accelerator (BYOPA), your own lab, are not beholden to corporate interests for your livelihood, not subject to frequent progress updates on how you spend your grant money, (etc.) Oh, and hopefully you persuade your research group to go along with you, so that when you face legal charges for breaking your contract, at least it wasn't for nothing.
Replies from: Manfred
↑ comment by Manfred · 2013-02-08T18:25:55.559Z · LW(p) · GW(p)
Charitably, "destroying your research" should refer to nullifying the effort that you put into advancing a field, not actually (and merely) throwing away your samples in an obvious manner.
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-02-08T18:28:44.364Z · LW(p) · GW(p)
How would you go about doing that?
(Also, my previous comment agreed with its parent, and was just pointing out the practical infeasibility of following through with such a course of action.)
Replies from: CCC, Manfred
↑ comment by CCC · 2013-02-08T20:45:53.750Z · LW(p) · GW(p)
There are several ways to nullify, or even reverse progress:
- Falsify some hard-to-duplicate results in a way that calls previous results into doubt
- Subtly sabotage one or more experiments that will be witnessed by others
- Enthusiastically pursue some different avenue of research, persuading others to follow you
- Leave research entirely, taking up a post as an undergraduate physics lecturer at some handy university
There would have to be an extremely good reason to try one of the top two, since they involve not only removing results, but actually poisoning the well for future researchers.
Replies from: roystgnr, oooo, sanyasi
↑ comment by roystgnr · 2013-02-09T07:53:40.079Z · LW(p) · GW(p)
Casting doubt on a research track is probably easier said than done, no? To use a ridiculous hypothetical example: "Cold fusion" has been the punchline of jokes to 99.9% of scientists ever since the 1989 experiment garnered a ton of publicity without an ounce of replicability, yet Wikipedia suggests that the remaining 0.1% decades later still includes a few serious research teams and a few million dollars of funding. If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn't fully paid off.
The fact that I had to resort to a ridiculous hypothetical example there shows an unavoidable problem with this article, by the way: no history of successful ethical concern about scientific publication can exist, since almost by definition any success won't make it into history. All we get to hear about is unconcern and failed concern.
Replies from: CCC
↑ comment by CCC · 2013-02-09T19:03:24.484Z · LW(p) · GW(p)
If Pons & Fleischmann were secretly trying to steer the world away from some real results by discrediting the field with embarrassing false results, it seems like a very risky gamble that still hasn't fully paid off.
Of course, no-one has found any dangerous results; so if that's what they were trying to hide, perhaps by leaving a false trail, then they've succeeded admirably, sending future researchers up the wrong path.
Replies from: roystgnr
↑ comment by roystgnr · 2013-02-10T06:59:59.841Z · LW(p) · GW(p)
In real life, I'm pretty sure that nobody has found any dangerous results because there aren't any dangerous results to find. This doesn't mean that creating scandals successfully reduces the amount of scientific interest in a topic, it just means that in this case there wasn't anything to be interested in.
↑ comment by oooo · 2013-07-09T00:31:49.735Z · LW(p) · GW(p)
Enthusiastically pursue some different avenue of research, persuading others to follow you
I am reading Kaj Sotala's latest paper "Responses to Catastrophic AGI Risk: A Survey" and I was struck by this thread regarding ethically concerned scientists. MIRI is following this option by enthusiastically pursuing FAI (slightly different avenue of research) and trying to persuade and convince others to do the same.
EDIT: My apologies -- I removed the second part of my comment proactively because it dealt with hypothetical violence of radical ethically motivated scientists.
↑ comment by sanyasi · 2013-02-09T10:57:51.646Z · LW(p) · GW(p)
It's debatable whether Heisenberg did the former, causing the mistaken experiment results that led the Nazi atomic program to conclude that a bomb wasn't viable. See http://en.wikipedia.org/wiki/Copenhagen_(play) for scientific entertainment (there's a good BBC movie about this starring Daniel Craig as Werner Heisenberg)
↑ comment by Manfred · 2013-02-08T20:46:11.204Z · LW(p) · GW(p)
Suppose we're in a bad-case modern scenario, where there's been close industry involvement, including us documenting early parts of the experiment, as well as some attention in the professional literature, and some researchers poised to follow up on our results. And then we directly discover something that would be catastrophic if used, so we have to keep it in as few peoples' hands as possible, we can't just be like Paul Berg and write an article asking for a moratorium on research. Let's say it's self-replicating nanotechnology or something.
One process you could follow is sort of like getting off facebook. Step one is to obfuscate what you've already done. Step two is to discredit your product. Step three is to think up a non-dangerous alternative. Step four is to start warning about the dangers.
In the case of nanotech, this would mean releasing disinformation in our technical reports for a while, then claiming contamination or instant failure of the samples, with e.g. real data cherry picked from real failures to back it up, then pushing industrial nanotech for protein processing using our own manufactured failure as a supporting argument, then talking to other researchers about the danger of self-replicating nanotech research.
Replies from: Kawoomba
↑ comment by Kawoomba · 2013-02-08T21:26:10.637Z · LW(p) · GW(p)
Your bad-case modern scenario seems more like the average to me (extent depending on the field). Most research that promises breakthroughs requires a lot of funding these days, which implies either close industry involvement or being part of some government sponsored project. Which both imply close supervision and teams of researchers, no Dr. Perelman type one-man-show. Even if there's no corporate/academic supervisor pestering you, if you want to do (default:expensive) research, you and your team better publish, or perish, as the aphorism goes.
Note I did not suggest just throwing away samples, both falsifying your reports / releasing disinformation opens you up to legal liabilities, damages, pariah status, and depends on convincing your research group as well. Unless you envision yourself as the team leader, in which case it's unlikely you'll be the first to notice the danger, and in which case you'll probably be self selected for being enthusiastic about what you do.
Take nanotech, say you start thinking that your current project may open the door to self-replicators. Well, most any nanotech related research paves part of the way there, whether a large or a small chunk. So stop altogether? But you went into the field willingly (presumably), so it's not like you're strictly against any progress that could be dual-used for self-replicators.
What I'm getting at is a researcher a) noticing the dangerous implications of his current research and then b) devoting himself to stopping it effectively and c) those efforts having a significant effect on the outcome is a contrived scenario in almost any scenario that doesn't seem Chinese-Room like concocted.
Maybe it's selection bias from the scientific news cycle, but unless there is a large "dark figure" of secret one-man researcher hermits like Perelman for whom your techniques may potentially work, there's little stopping the (hopefully hypothetical) doomsday clock.
Replies from: CCC
comment by Luke_A_Somers · 2013-02-08T14:51:19.412Z · LW(p) · GW(p)
Stylistic note - you use the Leonardo da Vinci submarine example twice in three paragraphs without acknowledgement that you just used it.
comment by Qiaochu_Yuan · 2013-02-09T00:38:08.623Z · LW(p) · GW(p)
Thanks for writing this! I like being able to share LW material with my friends that doesn't trigger philosophical landmines.
comment by Jonii · 2013-02-13T23:53:52.907Z · LW(p) · GW(p)
My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.
comment by Izeinwinter · 2013-02-12T14:08:26.346Z · LW(p) · GW(p)
You are missing a major problem. Not "secrecy will kill progress" - that is, in this context, a lesser problem. The major problem is that scientific secrecy would eventually kill the planet.
In a context of ongoing research and use of any discipline, dangerous techniques must be published, or they will be duplicated over and over again, until they cause major damage. If the toxicity of dimethylmercury were a secret, chemical laboratories and entire college campuses dying slowly, horrifically and painfully would be regular occurrences. No scientific work is done without a context, and so all discoveries will happen again. If you do not flag any landmines you spot, someone not-quite-as-sharp will eventually reach the same territory and step on them. If you find a technique you consider a threat to the world, it is now your problem to deal with, and secrecy is never going to be a sufficient response, but is instead merely an abdication of moral responsibility onto the next person to get there.
Replies from: ikrase, Troshen↑ comment by ikrase · 2013-02-12T15:03:01.956Z · LW(p) · GW(p)
My impression of this post was not that it made a focused argument in favor of secrecy specifically.
Replies from: ewbrownv↑ comment by ewbrownv · 2013-02-12T22:49:22.236Z · LW(p) · GW(p)
It's a recitation of arguments and anecdotes in favor of secrecy, so of course it's an argument in that direction. If that weren't the intention, there would also have been anti-secrecy arguments and anecdotes.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-02-13T06:08:48.428Z · LW(p) · GW(p)
See this comment.
Replies from: ikrase↑ comment by Troshen · 2013-02-25T22:16:12.444Z · LW(p) · GW(p)
This is an extremely important point. Historically, it might take a long time, if ever, for someone else to arrive at a discovery similar to the one you just made - for example, Leonardo's submarines. But that was when only a tiny fraction of humanity devoted time to experiments. His decision to hide his invention kicked the can of secret submarine attacks many years down the road and may have saved many lives. (I'm not so sure - leaders who wanted wars surely found other secret plots and stratagems, but at least he exercised his agency to not be the father of them.)
But things are different now. You can be practically guaranteed that if you are working on something, someone else in the world is working on it too, or will be soon. Being at a certain place and time in your industry puts you in a position to see the possible next steps, and you aren't alone.
If you see something dangerous that others don't, the best bet is to talk about it. More minds thinking and talking about it from multiple different perspectives have the best chance to solve it.
Communication is a great, helpful key to survival. I think we had it when the U.S. and the Soviets didn't annihilate the world even though U.S. policy was Mutual Assured Destruction. And I think we didn't have it in the U.S. Civil War and in WWI, when combat technology had raced ahead of the knowledge and training of the generals of those wars, which led to shocking massacres unintended by either side.
An example other than unfriendly AI is asteroid mining, and serious space travel in general. Right now we face the dangers from asteroids. But the ability to controllably move mass in orbit would inevitably become one of the most powerful weapons ever seen, unless people make a conscious choice not to use it for that. Although I've wanted to write fiction stories about it and work on it, I've actually hesitated for the simple reason that I think it's inevitable that it will become a weapon.
This post makes me confident that the action most likely to lead to humanity's growth and survival is to talk about it openly: first, because we're already vulnerable to asteroids and can't do anything about it; and second, because talking about it raises awareness of the problem so that more people can focus on solving it.
I really think that avoiding nuclear war is an example. When I was a teenager, everyone just assumed we'd all die in a nuclear war someday - eventually, through a deliberate war, an accident, or a Skynet-style Terminator incident, civilization as a whole would be gone. And eventually that fear just evaporated. I think it's because we as a culture kept talking about it so much and didn't leave it up to only a few monarchic leaders.
So I'm changing my outlook and plans based on this post and this comment. I plan to talk about and promote asteroid mining, and to write short stories about terrorists dropping asteroids on cities. Talking about it is better in the long run.
Replies from: Vladimir_Nesov, Izeinwinter↑ comment by Vladimir_Nesov · 2013-02-25T22:23:46.608Z · LW(p) · GW(p)
but at least he exercised his agency to not be the father of them
This distinction doesn't seem important.
↑ comment by Izeinwinter · 2013-02-25T22:55:33.770Z · LW(p) · GW(p)
I have given some thought to this specific problem - not just asteroids, but the fact that any spaceship is potentially a weapon, and as working conditions go, extended isolation does not have the best of records on the mental stability front.
Likely solutions: full automation with one-time-pad-locked command and control - this renders it a weapon as well controlled as nuclear arsenals, except with longer lead times on any strike, so even safer from a MAD perspective (and no fully private actor ever gets to run them). Or, if full automation is not workable, a good deal of effort expended on maintaining crew sanity: psych/political officers - called something nice, fluffy, and utterly anodyne to make people forget just how much authority they have - backed up with a remote-controlled self-destruct. Again, a one-time-pad comms lock. It's not going to be a libertarian free-for-all as industries go, more a case of "extremely well paid, to make up for the conditions and the sword that will take your head if you crack under the pressure." Good story potential in that, though.
Replies from: Troshen↑ comment by Troshen · 2013-02-25T23:43:36.957Z · LW(p) · GW(p)
I think we're heading off-topic with this one, and I'd like to continue the discussion and focus it on space, not just whether to reveal or keep secrets.
So I started this thread: http://lesswrong.com/r/discussion/lw/gsv/asteroids_and_spaceships_are_kinetic_bombs_and/
comment by Decius · 2013-02-09T23:49:06.393Z · LW(p) · GW(p)
Better than developing ethical scientists would be a policy of developing ethical political and military leaders.
Replies from: Nebu↑ comment by Nebu · 2013-02-15T16:13:36.526Z · LW(p) · GW(p)
Better for whom? I'd really like my rival countries to have ethical military leaders, but maybe I prefer my own country's military leaders to be ruthless.
Replies from: CCC, BillyOblivion, Decius↑ comment by CCC · 2013-02-16T05:50:18.848Z · LW(p) · GW(p)
I would prefer my own country's military leaders to be ethical as well, personally. A ruthless military leader may:
- Attempt to overthrow the government
- Declare war on a nearby country that he thinks he can defeat
- Subvert military supply lines in order to unethically increase his personal wealth
...all of which are behaviours I do not prefer.
↑ comment by BillyOblivion · 2013-02-20T07:04:08.607Z · LW(p) · GW(p)
Is ruthlessness necessarily unethical in a military leader?
Sometimes compassion is a sharp sword.
Replies from: Bugmaster↑ comment by Decius · 2013-02-15T23:46:19.656Z · LW(p) · GW(p)
Do you defect in the iterated prisoners' dilemma?
Replies from: Nebu↑ comment by Nebu · 2013-02-16T03:42:32.568Z · LW(p) · GW(p)
No, but I'm not sure military conflicts are necessarily iterated, especially from the perspective of me, an individual civilian within a nation.
Replies from: Decius↑ comment by Decius · 2013-02-16T07:42:05.618Z · LW(p) · GW(p)
But the selection of military leaders is iterated.
Replies from: Jayson_Virissimo, Nebu↑ comment by Jayson_Virissimo · 2013-02-23T13:35:47.914Z · LW(p) · GW(p)
But the selection of military leaders is iterated.
Most of us are not in a position to ever select a military leader, let alone do it an indefinite number of times.
Replies from: Decius↑ comment by Decius · 2013-02-23T22:58:08.344Z · LW(p) · GW(p)
Most adult US citizens are in a position to have nonzero input into the selection of the person who determines military policy.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2014-07-24T20:52:31.213Z · LW(p) · GW(p)
Sure, and most adult US citizens are in a position to have nonzero input into the selection of today's weather by choosing to open their door with the AC on or not. Nonzero is a very small hurdle indeed.
Replies from: Decius↑ comment by Nebu · 2013-02-19T18:33:14.289Z · LW(p) · GW(p)
I'm afraid I don't see the relevance.
Replies from: Decius↑ comment by Decius · 2013-02-20T02:47:12.439Z · LW(p) · GW(p)
I think the payoff matrix of warfare is very analogous to the PD payoff matrix, and that the track records of previous (and even current) military leaders are available to all serious players of the game. Also, I anticipate that others might make irrational decisions, like responding to a WMD attack with a WMD reprisal even if it doesn't benefit them; they might also make rational decisions, like publicly and credibly precommitting to a WMD reprisal in the event of a WMD attack.
Replies from: Nebu↑ comment by Nebu · 2013-03-06T14:57:33.759Z · LW(p) · GW(p)
I'm still not following you.
So first of all, you'll need to convince me that the payoff matrix for an individual civilian within a nation deciding who their military leader should be is similar to that of one of the prisoners in the PD. In particular, we'll need to look at what "cooperate" and "defect" even mean for the individual citizen. E.g., does "cooperate" mean "elect an ethical military leader"?
Second, assuming you do convince me that the payoff matrices are similar, you'll have to clarify whether you think warfare is iterated for an individual civilian, especially when the "other" nation defects. I suspect that if my leader is ethical and their leader is not, then I will be dead, and hence there is no iteration for me.
Thirdly, you may wish to clarify whether all the sentences after your first are intended to be new assertions, or if they are supposed to be supporting arguments for the first sentence.
Replies from: Decius↑ comment by Decius · 2013-03-07T05:56:36.947Z · LW(p) · GW(p)
Vastly simplified:
Survival is worth three points, destroying the opposing ideology is worth two points, and having at least one survivor is worth twenty points.
If nobody uses WMDs, everyone gets 23 points. If one side uses WMDs, they survive and destroy their ideological opponent for 25 points to the opposing side's 20. If both sides use WMDs, both score 2 for destroying the opponent.
Given that conflicts will happen, a leader who refuses to initiate use of WMDs while convincing the opponent that he will retaliate with them is most likely to result in the dual-cooperate outcome. Therefore the optimum choice for the organism which selects the military leaders is to select leaders who are crazy enough to nuke them back, but not crazy enough to launch first.
If you share the relative ranking above (not-extinction>>surviving>wiping out others), then your personal maximum comes from causing such a leader to be elected (not counting unrelated effects on e.g. domestic policy). The cheapest way of influencing that is by voting for such a leader.
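(A minimal sketch, added for illustration and not part of the original comment: it simply tabulates the point values stated above, assuming "launch" corresponds to defection and "refrain" to cooperation, and reproduces the 23/25/20/2 figures.)

```python
# Sketch of the payoff scheme described above (an illustration, not the commenter's code).
# Assumptions: a side "survives" unless the opponent launches; humanity has
# "at least one survivor" unless both sides launch.

SURVIVAL = 3            # your country exists at the end of the game
DESTROY_OPPONENT = 2    # the opposing ideology is wiped out
HUMANITY_SURVIVES = 20  # at least one survivor anywhere

def payoff(you_launch: bool, they_launch: bool) -> int:
    score = 0
    if not they_launch:
        score += SURVIVAL
    if you_launch:
        score += DESTROY_OPPONENT
    if not (you_launch and they_launch):
        score += HUMANITY_SURVIVES
    return score

for you in (False, True):
    for they in (False, True):
        print(f"you launch={you}, they launch={they}: {payoff(you, they)} points")
# neither launches -> 23 each; one launches -> 25 vs. 20; both launch -> 2 each
```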
Replies from: Nebu↑ comment by Nebu · 2013-04-14T15:10:53.853Z · LW(p) · GW(p)
What's the difference between "Survival" and "having at least one survivor"?
The way I see it:
- If I'm dead, 0 points.
- If I'm alive, but my city got nuked, so it's like a nuclear wasteland, 1 point.
- If I'm alive, and living via normal north american standards, 2 points.
We're assuming a conflict is about to happen, I guess, or else the hypothetical scenario is boring and there are no important choices for me to make.
The question is not "Do I elect a crazy leader or a non-crazy leader?", but rather, "Do I elect a leader who believes 'all's fair in love and war', or a leader who believes in 'always keep your word and die with honor'?"
I.e., if you think "ethical vs. unethical" means "will retaliate-but-not-initiate vs. will not retaliate-but-not-initiate", then it's no wonder why we're having communication problems.
Replies from: Decius↑ comment by Decius · 2013-04-14T21:26:51.254Z · LW(p) · GW(p)
"Having at least one survivor" means that humanity exists at the end of the game. "Surviving" means that your country exists at the end of the game.
I sidestepped 'ethical' entirely in favor of 'practical'. I also had to address this question in a manner not nearly as hypothetical or low-stakes as this.
Replies from: Nebu
comment by Kaj_Sotala · 2013-02-13T06:08:17.276Z · LW(p) · GW(p)
Since "this post is arguing for secrecy in particular being a good thing" seems to be a common misunderstanding of the intent of the post, I deleted the mention of hiding one' work from the opening paragraph, as well as added a paragraph explicitly saying that we're not saying that any particular way of taking responsibility is necessarily the correct one.
Replies from: tadamsmar↑ comment by tadamsmar · 2014-01-16T17:18:36.687Z · LW(p) · GW(p)
As you point out, Szilard took steps to keep his nuclear chain-reaction patent secret from the Germans. He later took steps that led the US government to start preventing the open publication of scientific papers on nuclear reactor design and other related topics. (The Germans noticed when the journals went quiet.)
Right after Hiroshima and Nagasaki, he thought the US government was putting out too much public information on the A-bomb. He even thought the Einstein-Szilard letters should remain secret. His idea at the time was the US government should reveal almost nothing and use the promise to reveal as a bargaining chip in an effort to get an international agreement for the control of nuclear weapons.
Szilard's secrecy about the neutron chain-reaction made it hard for him to get anyone to help him work on making nuclear energy practical between 1934 and 1940. So, it arguably slowed down everyone, not just the Germans.
Source: the Szilard biography "Genius in the Shadows".
comment by [deleted] · 2013-02-12T21:24:18.407Z · LW(p) · GW(p)
A scientist who shares a potentially harmful invention with the rest of the world might not necessarily lack ethical concern. If I invented the knife, I could choose to think that sharing it with others would increase the probability of random stabbings and accidental self-inflicted injury (very bad), or I could choose to focus on the fact that it would be an extremely useful tool in everyday life (very good).
comment by lukeprog · 2013-02-11T09:47:20.298Z · LW(p) · GW(p)
Ron Arkin might also belong on the list.
From Robots at War: Scholars Debate the Ethical Issues:
“I was very enthralled with the thrill of discovery and the drive for research and not as much paying attention to the consequences of, ‘If we answer these questions, what’s going to happen?’” [roboticist Ronald Arkin] says. What was going to happen soon became apparent: Robotics started moving out of the labs and into the military-industrial complex, and Mr. Arkin began to worry that the systems could eventually be retooled as weaponized “killing machines fully capable of taking human life, perhaps indiscriminately.”
Arkin went on to write one of the better works of "mainstream machine ethics".
comment by Michael Wiebe (Macaulay) · 2013-02-13T00:17:57.815Z · LW(p) · GW(p)
On a related topic, Pinker has a very useful discussion of the case for and against open discussion of dangerous (non-technological) ideas. (Mindkiller warning)
comment by V_V · 2013-02-09T16:14:29.843Z · LW(p) · GW(p)
Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
The reasoning is that if you discover something which could have potentially harmful applications, it's better that there is public discussion about it rather than it becoming a toy in the hands of corporations or government agencies.
If you conceal or halt your research, somebody else is going to repeat the same discovery soon. If all ethically concerned scientists stop pursuing some line of research, then non-ethically concerned scientists will be the only ones doing it.
As for conducting dangerous research in secret, you will not be able to prevent leaks, and the chances that you screw up something are much higher if you act without public oversight. Moreover, it is unethical for you to do experiments that potentially put other people at risk without their informed consent.
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to hand-wave away its conspicuous lack of published research. But seriously, that's not the right way of doing it:
If you are a legitimate research organization ethically concerned by AI safety, the best way to achieve your goals is to publish and disseminate your research as much as possible, in particular to people who may be building AIs.
Because, let's face it, if AGI is technically feasible, you will not be the first ones to build one, and even if by some absurdly improbable coincidence you were, the chances that you get it right while working in secrecy are negligible.
Of course, in order to publish research, you must first be able to do research worth publishing. As I said before, for the SI this would be the "flour on the invisible dragon" test.
Replies from: Vladimir_Nesov, asparisi, ricketson, Kaj_Sotala, army1987, ygert↑ comment by Vladimir_Nesov · 2013-02-10T02:21:21.128Z · LW(p) · GW(p)
the best way to achieve your goals is to publish and disseminate your research as much as possible
This is an important question, and simply asserting that the answer to it is one way or the other is not helpful for understanding the question better.
Replies from: V_V↑ comment by asparisi · 2013-02-10T21:48:35.417Z · LW(p) · GW(p)
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Qualitatively, I'd say it has something to do with the ratio of the expected harm of immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish, so that any risks that are there will be found. If the risks are high, consider the amount of investment and research in the field. If the investment is high, it is probably better to reveal your research (or parts of it) in the hope of creating a substantive dialogue about risks. If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret. This probably also varies by field with respect to how many competing paradigms are available and how incremental the research is: psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing, so it is less likely that a particular piece of research will be duplicated; biologists tend to have larger agreement and their work tends to be more incremental, making it more likely that a particular piece of research will be duplicated.
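(A toy encoding of the heuristic above, added for illustration and not part of the original comment; the numeric thresholds and parameter names are invented purely to make the branching explicit.)

```python
# Toy sketch of the publish-vs-withhold heuristic described above.
# The thresholds and parameter names are assumptions for illustration only.

def disclosure_heuristic(expected_harm: float, field_investment: float,
                         harm_threshold: float = 0.5,
                         investment_threshold: float = 0.5) -> str:
    """Rough recommendation following the comment's reasoning (both inputs in [0, 1])."""
    if expected_harm < harm_threshold:
        # Low risk: publish, so that any remaining risks get found.
        return "publish"
    if field_investment >= investment_threshold:
        # High risk, crowded field: someone will get there anyway, so reveal
        # (at least partially) in the hope of a substantive dialogue about risks.
        return "publish at least partially and start a dialogue about risks"
    # High risk, little parallel work: duplication is less likely, so secrecy may buy time.
    return "consider keeping it secret"

# Example: high expected harm in a sparsely funded field.
print(disclosure_heuristic(expected_harm=0.9, field_investment=0.2))
```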
Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.
Replies from: V_V, Troshen↑ comment by V_V · 2013-02-11T00:20:05.136Z · LW(p) · GW(p)
I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.
Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn't have wanted to let the Nazis know what you were doing, and with good reason.
But barring such exceptional circumstances, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.
If the investment is low, it is less likely that anyone will come up with the same discovery and so you may want to keep it a secret.
How likely is it that a potentially harmful breakthrough happens in a research field where there is little interest?
psychologists work with a lot of different theories of the mind, many of which do not explicitly endorse incremental theorizing
Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?
Honestly, I find cases of alternative pleading such as V_V's post here suspect. It is a great rhetorical tool, but reality isn't such that alternative pleading actually can map onto the state of the world. "X won't work, you shouldn't do X in cases where it does work, and even if you think you should do X, it won't turn out as well" is a good way to persuade a lot of different people, but it can't actually map onto anything.
That statement seems contrived; I suppose that by "can map onto the state of the world" you mean "is logically consistent".
Of course, I didn't make that logically inconsistent claim. My claim is that "X probably won't work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken".
↑ comment by ricketson · 2013-02-09T20:38:45.767Z · LW(p) · GW(p)
Good points, but it was inappropriate to question the author's motives and the attacks on the SI were off-topic.
Replies from: V_V↑ comment by V_V · 2013-02-10T01:30:10.475Z · LW(p) · GW(p)
I didn't claim that his praise of scientific secrecy was questionable because of his motives (that would have been a circumstantial ad hominem), or that his claims were dishonest because of his motives.
I claimed that his praise of scientific secrecy was questionable for the points I mentioned, AND, that I could likely see where it was coming from.
the attacks on the SI were off-topic.
Well, he specifically mentioned the SI mission, complete with a link to the SI homepage. Anyway, that wasn't an attack, it was a (critical) suggestion.
↑ comment by Kaj_Sotala · 2013-02-09T17:41:43.546Z · LW(p) · GW(p)
That's a rather uncharitable reading.
Replies from: V_V↑ comment by V_V · 2013-02-10T01:39:22.913Z · LW(p) · GW(p)
Possibly, but I try to care about being accurate, even if that means not being nice.
Do you think there are errors in my reading?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-02-10T07:11:34.198Z · LW(p) · GW(p)
I guess you are writing this because your employer, the Singularity Institute (or whatever they are called now), uses the "secret dangerous knowledge" excuse to hand-wave away its conspicuous lack of published research. But seriously, that's not the right way of doing it:
Your criticism would be more reasonable if this post had only given examples of scientists who hid their research, and said only that everyone should consider hiding their research. But while the possibility of keeping your research secret was certainly brought up as an option, the overall message of the post was one of general responsibility and engagement with the results of your work, as opposed to a single-minded focus on just doing interesting research and damn the consequences.
Some of the profiled scientists did hide or destroy their research, but others actively turned their efforts toward various ways by which the negative effects of that technology could be reduced, be it by studying the causes of war, campaigning against the use of a specific technology, refocusing to seek ways by which their previous research could be applied to medicine, setting up organizations for reducing the risk of war, talking about the dangers of the technology, calling for temporary moratoriums and helping develop voluntary guidelines for the research, or financing technologies that could help reduce general instability.
Applied to the topic of AI, the general message does not become "keep all of your research secret!" but rather "consider the consequences of your work and do what you feel is best for helping ensure that things do not turn out to be bad, which could include keeping things secret but could also mean things like focusing on the kinds of AI architectures that seem the most safe, seeking out reasonable regulatory guidelines, communicating with other scientists on any particular risks that your research has uncovered, etc." That's what the conclusion of the article said, too: "Hopefully, the examples provided in this post can encourage more researchers to consider the broader consequences of their work."
The issue of whether some research should be published or kept secret is still an open question, and this post does not attempt to suggest an answer either way, other than to suggest that keeping research secret might be something worth considering, sometimes, maybe.
Replies from: V_V↑ comment by V_V · 2013-02-10T12:13:20.071Z · LW(p) · GW(p)
Thanks for the clarification.
However, if you are not specifically endorsing scientific secrecy, but just ethics in conducting science, then your opening paragraph seems a bit of a strawman:
Today, the general attitude towards scientific discovery is that all research should be shared and disseminated as widely as possible, and that scientists are not themselves responsible for how their work is used. And for someone who is interested in science for its own sake, or even for someone who mostly considers research to be a way to pay the bills, this is a tempting attitude. It would be easy to only focus on one’s work, and leave it up to others to decide what to do with it.
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
Replies from: timtyler↑ comment by timtyler · 2013-02-11T02:06:34.022Z · LW(p) · GW(p)
Seriously, who is claiming that scientists should not take ethics into consideration while they do research?
It's more that humans specialise. Scientist and moral philosopher aren't always the same person.
Replies from: whowhowho, V_V↑ comment by whowhowho · 2013-02-11T12:03:24.117Z · LW(p) · GW(p)
OTOH, you don't get let off moral responsibility just because it isn't your job.
Replies from: timtyler↑ comment by timtyler · 2013-02-11T23:28:06.582Z · LW(p) · GW(p)
It's more that many of the ethical decisions - about what to study and what to do with the resulting knowledge - are taken out of your hands.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-12T01:27:03.594Z · LW(p) · GW(p)
Only they are not, because you are not forced to do a job just because you have invested in the training - however strange that may seem to Homo Economicus.
Replies from: timtyler↑ comment by timtyler · 2013-02-12T10:52:13.369Z · LW(p) · GW(p)
Resigning would probably not affect the subjects proposed for funding, the number of other candidates available to do the work, or the eventual outcome. If you are a scientist who is concerned with ethics there are probably lower-hanging fruit that don't involve putting yourself out of work.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-12T11:02:50.247Z · LW(p) · GW(p)
If those lower hanging fruit are things like choosing what to research, then those are not "taken out of your hands" as stated in the grandfather.
Replies from: timtyler↑ comment by timtyler · 2013-02-12T11:49:51.952Z · LW(p) · GW(p)
Some of those decisions are taken out of scientists' hands, since they are made by funding bodies. Scientists don't often get to study what they like; they are frequently constrained by what subjects receive funding. That is what I was referring to.
↑ comment by V_V · 2013-02-11T11:34:05.577Z · LW(p) · GW(p)
Moral philosophers hopefully aren't the only people who take ethics into account when deciding what to do.
Replies from: BerryPick6↑ comment by BerryPick6 · 2013-02-11T12:53:23.554Z · LW(p) · GW(p)
Some data suggests they make roughly the same ethical choices everyone else does.
↑ comment by A1987dM (army1987) · 2013-02-10T12:00:59.187Z · LW(p) · GW(p)
(or whatever they are called now)
http://lesswrong.com/r/discussion/lw/gis/singularity_institute_is_now_machine_intelligence/
↑ comment by ygert · 2013-02-09T16:54:50.760Z · LW(p) · GW(p)
I upvoted this, as it has some very good points about why the current general attitude toward scientific secrecy is what it is. I almost didn't, though, as I feel the attitude in the last few paragraphs is unnecessarily confrontational. I think you are mostly correct in what you said there, especially in the second-to-last paragraph. But then the last paragraph rather spoils it by being very confrontational and rude. I would not have had reservations about my upvote if you had simply left that paragraph off. As it is, I almost didn't upvote, as I have no wish to condone any sort of impoliteness.
Replies from: V_V↑ comment by V_V · 2013-02-09T17:11:53.783Z · LW(p) · GW(p)
Is your complaint about the tone of the last paragraphs, or about the content?
In case you are wondering, yes, I have a low opinion of the SI. I think it's unlikely that they are competent to achieve what they claim they want to achieve.
But my belief may be wrong, or may have been correct in the past but then made obsolete by the SI changing their nature.
While I don't think that AI safety is presently as significant an issue as they claim it is, I see that there is some value in doing some research on it, as long as the results are publicly disseminated.
So my last paragraphs may have been somewhat confrontational, but they were an honest attempt to give them the benefit of the doubt and to suggest to them a way to achieve their goals and prove my reservations wrong.
comment by lukeprog · 2013-04-21T01:19:00.040Z · LW(p) · GW(p)
was granted a patent for the atomic bomb in 1934
I think you mean "for the nuclear chain reaction."
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-04-21T07:13:24.474Z · LW(p) · GW(p)
Thanks, fixed.
comment by halcyon · 2013-04-11T11:09:45.530Z · LW(p) · GW(p)
I can tell I won't like Bill Joy's article. He can do what he wants to, but I don't see how "humanity" is a good argument against a robotic future. Isn't it a bit presumptuous to assume that all humans are content to remain human, assuming they even like being human all that much?
comment by VCM · 2013-03-13T06:51:05.128Z · LW(p) · GW(p)
Thanks, insightful post. I find the research a bit patchy, though. On the atomic bomb alone there has been a vast literature since the 1950s, even in popular fiction - and a couple of crucial names like Oppenheimer (vs. Teller), the Russell–Einstein Manifesto, or von Weizsäcker are absent here.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-03-15T13:41:24.854Z · LW(p) · GW(p)
Thanks. The Russell-Einstein manifesto is mentioned in the post?
comment by Bugmaster · 2013-02-11T20:06:37.043Z · LW(p) · GW(p)
Even if keeping research secret in our modern world was feasible, I don't believe it would be desirable. I would argue that humanity has benefited tremendously from chemistry, modern physics, genetics, and informatics. The problem is that knowledge is amoral. The same knowledge that allows you to build a bomb also allows you to build a nuclear reactor. The same compiler that you use to create the latest communication protocols also allows you to create destructive computer viruses. There's no way of keeping one and discarding the other; and, on the whole, we are IMO better off with computers and other such things than we are without them.
comment by Maybe_a · 2013-02-09T07:17:37.252Z · LW(p) · GW(p)
Standing against unintended pandemics, atomic warfare, and other extinction-threatening events has been quite a good idea in retrospect. Those of us working on scientific advances should indeed ponder the consequences.
But the Immerwahr-Haber episode is just an unrelated tearjerker. Really, inventing a process for the creation of nitrogen fertilizers is so much more useful than shooting oneself in the heart. Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.
Replies from: DanArmak↑ comment by DanArmak · 2013-02-09T12:28:33.997Z · LW(p) · GW(p)
Also, chemical warfare turned out not to kill many people since WWI, so such a sacrifice is rather irrelevant.
That is rather begging the question. As a result of WW1 there have been agreements in place - the Geneva Protocol - not to develop or use chemical weapons, and so fewer people have been killed by them than might have otherwise.
Replies from: Maybe_a↑ comment by Maybe_a · 2013-02-09T16:59:52.007Z · LW(p) · GW(p)
Well, it seems somewhat unfair to judge the decision based on information not available to the decision-maker; however, I fail to see how that is an 'implicit premise'.
I didn't think the Geneva Convention was that old, and actually, updating on it makes Immerwahr's decision score worse, due to a lower expected number of saved lives (through a lower chance of chemical weapons being used).
Hopefully, role-playing this update made me understand that in some value systems it's worth it. Most likely, E(Δ victims due to Haber's war efforts) > 1.
Replies from: DanArmak↑ comment by DanArmak · 2013-02-09T17:59:00.244Z · LW(p) · GW(p)
Here's what I meant by saying you were begging the question: you were assuming the outcome (few people would be killed by chemical warfare after WW1) did not depend on the protests against chemical weapons.
You said originally that protesting against chemical warfare (CW) during WW1 was not worth the sacrifice involved, because few people were killed by CW after WW1.
But the reason few people were killed is that CW was not used often. And one contributing factor to its not being used was that people had protested its use in WW1, and created the Geneva Convention.
People who protested CW achieved their goal in reducing the use of CW. So the fact CW was not used much and killed few people, is not evidence that the protest was in vain - to the contrary, it's exactly what you would expect to see if the protest was effective.
comment by MaoShan · 2013-02-09T06:51:10.817Z · LW(p) · GW(p)
Why are some of your links triggering scammish popups? Is it supposed to be some sort of humor?
Replies from: Kaj_Sotala, MaoShan, poiuyt↑ comment by Kaj_Sotala · 2013-02-09T07:17:27.899Z · LW(p) · GW(p)
They are? Which ones?
Replies from: MaoShan↑ comment by MaoShan · 2013-02-10T03:06:26.084Z · LW(p) · GW(p)
The word "pay" in paragraph 1, the word "details" in paragraph 5, and the word "money" in paragraph 7. It's possible that either my computer or the LW site has some very creative adware.
Replies from: Kawoomba, pjeby, Nornagest, fubarobfusco, None↑ comment by Nornagest · 2013-02-10T07:46:06.448Z · LW(p) · GW(p)
Like fubarobfusco says below, this is probably a malware issue. I saw something similar when a disk-recovery program I didn't vet thoroughly enough infected me with a searchbar package that I'll leave nameless; MalwareBytes took care of most of it for me, though I had to do a little cleanup work myself.
It should probably be mentioned that most widespread antivirus packages won't catch this sort of thing; you need something that casts a broader net.
↑ comment by fubarobfusco · 2013-02-10T07:16:16.666Z · LW(p) · GW(p)
Your computer probably has a badware problem. If you are running Windows, try anti-spyware programs such as Spybot. Otherwise, check your browser proxy settings and browser extensions ....
Replies from: MaoShan↑ comment by MaoShan · 2013-02-11T00:54:24.577Z · LW(p) · GW(p)
I think it actually may have been an add-on that was intentionally (or just carelessly) installed into Firefox by another family member. I can shut it off myself. Seriously, who would download a program that explicitly promises more popups? (facepalm)
Replies from: CCC↑ comment by CCC · 2013-02-11T07:30:32.369Z · LW(p) · GW(p)
Seriously, who would download a program that explicitly promises more popups?
Depends how it's marketed. Or whether the person downloading it knew what they were downloading. Or even that they were downloading/installing something.
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-02-11T19:41:55.764Z · LW(p) · GW(p)
Seriously, who would download a program that explicitly promises more popups?
(emphasis added)
Replies from: army1987↑ comment by A1987dM (army1987) · 2013-02-11T19:43:24.788Z · LW(p) · GW(p)
(Again, I should stop replying to comments without reading their ancestors first.)
↑ comment by poiuyt · 2013-02-09T07:07:32.646Z · LW(p) · GW(p)
I'm not seeing any popups?
Replies from: MaoShan↑ comment by MaoShan · 2013-02-10T03:09:27.019Z · LW(p) · GW(p)
Refer to the nested comment above for the details. So nobody else here has links on those words?
Replies from: poiuyt, Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-02-10T05:46:41.689Z · LW(p) · GW(p)
Nope. Just you, I'm afraid.
comment by bogus · 2013-02-08T11:18:00.013Z · LW(p) · GW(p)
Good article. Some of these concerns also apply to relatively "mundane" research, such as particle physics experiments. These experiments require huge amounts of resources that could be put to better use; they involve existential risks (such as the creation of black holes or "strange matter"); and they're often advocated out of a misguided sense that particle physics is inordinately important due to its being a "foundation ontology" for physical reality. This, even though drawing conclusions from particle physics experiments is a highly non-trivial endeavour which is fraught with cognitive biases (witness the bogus "neutrinos are FTL" claim from a while ago) and arguably tells us very little about what determines the "high-level" physical outcomes we actually care about.
You could also add some references to what religious and ethical leaders have said on the issue. The Roman Catholic Pope John Paul II grappled with this issue in his encyclical letter Faith and Reason, where he stated:
33 ... It is the nature of the human being to seek the truth. This search looks not only to the attainment of truths which are partial, empirical or scientific; nor is it only in individual acts of decision-making that people seek the true good. ... It must not be forgotten that reason too needs to be sustained in all its searching by trusting dialogue and sincere friendship. A climate of suspicion and distrust, which can beset speculative research, ignores the teaching of the ancient philosophers who proposed friendship as one of the most appropriate contexts for sound philosophical enquiry.
56 ... I cannot but encourage philosophers—be they Christian or not—to trust in the power of human reason and not to set themselves goals that are too modest in their philosophizing. The lesson of history in this millennium now drawing to a close shows that this is the path to follow: it is necessary not to abandon the passion for ultimate truth, the eagerness to search for it or the audacity to forge new paths in the search.
The Dalai Lama also made comparable statements, drawing on the Buddhist doctrine of dhamma vicaya, which posits self-knowledge (called ātman in this context) as the proper foundation of any scientific inquiry.
Edit: Why the downvotes? You don't have to like it, but the Roman Catholic Pope and the Dalai Lama are seen as ethical leaders and role models by many people. So what they say is quite important.
Replies from: Kaj_Sotala, Decius, ricketson↑ comment by Kaj_Sotala · 2013-02-08T16:23:14.567Z · LW(p) · GW(p)
I didn't downvote, but people might have done so since the focus of the article was on ethically concerned scientists, and the Pope and Dalai Lama aren't scientists.
comment by ricketson · 2013-02-09T20:23:40.108Z · LW(p) · GW(p)
Scientific discoveries are a form of information with great relevance to the public. Sharing such information is democratic; keeping such information secret is authoritarian. I propose that keeping scientific information secret has all the same ethical and practical problems as authoritarian/autocratic political regimes.
Scientists have to ask themselves two questions along these lines: 1) Do you trust humanity? 2) What does humanity need to understand?
I suggest that scientists research issues that are important for humanity and then share their findings, rather than researching things that are frivolous and then keeping secrets.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-02-10T00:41:11.058Z · LW(p) · GW(p)
Sharing such information is democratic; keeping such information secret is authoritarian. I propose that keeping scientific information secret has all the same ethical and practical problems as authoritarian/autocratic political regimes.
Why so deontological all of a sudden?