Open Thread, Aug. 1 - Aug 7. 2016
post by Elo · 2016-08-01T00:12:58.663Z · LW · GW · Legacy · 83 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
83 comments
Comments sorted by top scores.
comment by Elo · 2016-08-04T11:20:15.051Z · LW(p) · GW(p)
http://www.abc.net.au/triplej/programs/hack/hack-thursday/7674406
I had the opportunity to be on national radio talking about cryonics - my segment starts at 17:30. And the media will always cut and paste what you say. They did enjoy my tagline, "you only live twice", and gave it an overall positive spin.
Article also here: http://www.abc.net.au/triplej/programs/hack/cheating-death-with-cryonics/7662164
comment by polymathwannabe · 2016-08-03T01:13:38.483Z · LW(p) · GW(p)
Elo has 145 negative votes for the past month. This is getting ridiculous. What's Eugine trying to prove?
Replies from: gjm, skeptical_lurker, username2, knb
↑ comment by gjm · 2016-08-04T17:13:52.693Z · LW(p) · GW(p)
It's worse than that.
Right now his last-30-days figure is -131. But his votes are at 37% positive, not 0%. That means he's actually on something like +187 up and -318 down in the last 30 days.
I wouldn't say "getting ridiculous", though. Eugine has been doing the same to me for months.
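The arithmetic behind that estimate can be sketched out. A minimal Python check, assuming only the two published numbers (net karma and percent-positive) as inputs; the helper name is mine:

```python
def decompose_votes(net, frac_positive):
    """Recover approximate upvote/downvote counts from a net karma
    score and the fraction of votes that were positive.

    With up - down = net and up / (up + down) = frac_positive,
    the total vote count is net / (2 * frac_positive - 1).
    """
    total = net / (2 * frac_positive - 1)
    up = frac_positive * total
    return round(up), round(total - up)

print(decompose_votes(-131, 0.37))  # -> (186, 317)
```

Rounding at intermediate steps explains the slight difference from the +187/-318 figure above.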
↑ comment by skeptical_lurker · 2016-08-04T21:21:47.707Z · LW(p) · GW(p)
To say something somewhat controversial: while I disapprove of Eugine's mass downvoting, it really isn't that bad compared to the standard of discourse and the dirty tactics used elsewhere. On reddit the site admins have allegedly been altering votes and engaging in censorship; there has also been doxing and blackmail. Offline you could be fired, beaten up, or even, in extremis, killed for having the wrong political views, and I'm not just talking about people who live in a warzone or North Korea - the same thing does happen, albeit less frequently, in the US or UK.
I think Eugine thinks he is just trying to go tit for tat against the left. This isn't a justification I agree with - I think LW should be held to a higher standard than reddit - but I can see how it would make sense from his point of view.
Replies from: gjm, Elo
↑ comment by gjm · 2016-08-04T21:54:07.940Z · LW(p) · GW(p)
Some of Eugine's mass-downvoting is obviously aimed at (what he considers) the pinko commie social justice warrior betas of LW. But there's a more obvious explanation for his attacking Elo: Elo is a mod and keeps banning Eugine-sockpuppets, and is therefore The Enemy.
↑ comment by Elo · 2016-08-04T21:29:12.741Z · LW(p) · GW(p)
except I am not the left!
Replies from: gjm, skeptical_lurker
↑ comment by skeptical_lurker · 2016-08-04T21:37:34.300Z · LW(p) · GW(p)
What would you say your political views are?
Replies from: Elo
↑ comment by Elo · 2016-08-04T21:47:07.410Z · LW(p) · GW(p)
I mostly don't have any.
I think people are in charge of their own destiny, which is kinda capitalist-libertarian leaning. I know that not everyone is dealt the same hand in life, but I am not personally motivated to change that. I think success is hard but can be worked on.
I have strong pro-choice views on voluntary euthanasia, and it's the only thing I have ever been active about (which I am being active about now, but not on LW).
I am certainly not part of any clear and obvious party group that could be identified. On top of that, I am not even in America, so trying to fight me on the internet is relatively irrelevant to an American.
(not sure if this answers your question/reason for asking - can you be more specific?)
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2016-08-04T21:54:05.682Z · LW(p) · GW(p)
(not sure if this answers your question/reason for asking - can you be more specific?)
It's a good enough answer, although I was wondering what you had said to annoy Eugine so much.
Of course, on some issues libertarian positions agree with SJW positions, so maybe Eugine thinks you are more left-wing than you really are?
Replies from: Elo
↑ comment by Elo · 2016-08-04T22:13:49.405Z · LW(p) · GW(p)
yep. What gjm said.
Elo is a mod and keeps banning Eugine-sockpuppets, and is therefore The Enemy.
Other mods, before my time as a mod, imposed a ban on Eugine for sock-puppet voting behaviour and then proceeded to let him post when he made new accounts; identifying a new Eugine takes time, so his posts would stay up for weeks at a time. Also, if you ban an account without deleting the posts, his mark stays. I stepped in and won't let any of his posts stand more than a day, because he is banned. He also has a habit of posting in identifiable ways, repeatedly - up to 10 times, trying to post literally the same thing. Him posting is just him trying to break the ban, which STILL STANDS FOR REASONS OF ONGOING SOCK-PUPPET VOTING.
If he came back, didn't sockpuppet, and picked a new alias - we probably wouldn't know. But idiots with a vengeance... It's relatively easy to work out who he is when he posts or sockpuppet-downvotes. We are working on a reverse-downvote solution. I don't have any interest in pressuring dev volunteers, so it will happen when it happens. Feel free to PM me if you want to join the development of a solution. I think Python is the language it will be happening in.
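For what it's worth, the core of a reverse-downvote pass is conceptually simple. A minimal Python sketch, assuming a hypothetical vote log of (voter, comment, delta) tuples - a stand-in schema, not the actual LW database layout, and the function name is made up:

```python
def reversal_corrections(votes, banned_voters):
    """Compute per-comment karma corrections that undo every vote
    cast by a banned account.

    `votes` is an iterable of (voter_id, comment_id, delta) tuples,
    an illustrative schema only.
    """
    corrections = {}
    for voter, comment, delta in votes:
        if voter in banned_voters:
            # Undoing a vote means applying the opposite delta.
            corrections[comment] = corrections.get(comment, 0) - delta
    return corrections

votes = [("sock_1", "c1", -1), ("alice", "c1", 1), ("sock_2", "c2", -1)]
print(reversal_corrections(votes, {"sock_1", "sock_2"}))  # -> {'c1': 1, 'c2': 1}
```

The hard part in practice is the one Elo mentions: reliably identifying which accounts are sockpuppets in the first place.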
Replies from: philh
↑ comment by username2 · 2016-08-03T10:48:25.754Z · LW(p) · GW(p)
I have no idea, but I think it has more to do with his personal rage than with anything strategic, because I don't see what exactly you can achieve by such blatantly obvious behaviour.
Replies from: Viliam, skeptical_lurker
↑ comment by Viliam · 2016-08-03T15:39:03.466Z · LW(p) · GW(p)
what exactly you can achieve by such blatantly obvious behaviour
If I remember correctly, discouragement of "wrongthinkers" is Eugine's explicit goal.
One possible outcome is the discouragement of Elo (and other users who get downvoted regularly). Another possible outcome is the discouragement of people who would like to express opinions different from Eugine's, but don't want to get involved in the ongoing karma manipulation, so they either avoid the topic or leave the website. Both outcomes seem compatible with what Eugine wants.
In the spirit of "trivial inconveniences", making people annoyed -- even by something pointless -- can cause significant differences in their behavior. The changes on LW are mostly not about what people do, but about what people avoid doing.
Eugine failed to make LW a recruitment ground for his cult, but he can at least enjoy watching it burn.
(Disclaimer: I actually didn't like Elo's few recent articles. But that's of course different from going and downvoting every single comment he made.)
↑ comment by skeptical_lurker · 2016-08-04T21:26:56.662Z · LW(p) · GW(p)
This is a criticism that often occurs to me whenever there is crude censorship or other similar tactics. I would imagine a lot of political activism is entirely counterproductive - humans are not often strategic.
Replies from: Viliam
↑ comment by Viliam · 2016-08-05T08:36:07.129Z · LW(p) · GW(p)
I would imagine a lot of political activism is entirely counterproductive
I think that people usually take the way that allows them to display their personal "virtue" (i.e. loyalty to the cause) rather than the one that optimizes for what the group is trying to achieve. Which is why movements that originally tried to achieve something meaningful gradually mutate into their own strawpersons.
For example, you can start with "people should be treated equally, even if they are X", but soon you realize that saying positive things about X and negative things about non-X is what gets you perceived as a true believer -- and pointing out exceptions gets you labeled as a heretic, -- so the movement gradually changes into de facto X-supremacism.
Or you can realize that trying to be politically correct at all costs sometimes makes people say stupid things, so you start signalling your intelligence by saying politically incorrect things... and continue doing so even in situations where saying the politically incorrect things makes you stupid.
I guess most people are not aware of participating in this process; they simply do more of what is socially rewarded in their group, and less of what is socially punished, without realizing they are being pushed somewhere they originally didn't aim to be.
↑ comment by knb · 2016-08-03T03:26:33.305Z · LW(p) · GW(p)
He's probably just crazy.
Replies from: Viliam
↑ comment by Viliam · 2016-08-03T15:42:43.861Z · LW(p) · GW(p)
Not enough data to make a good guess, but sociopathy would be among my top hypotheses.
Replies from: Lumifer
↑ comment by Lumifer · 2016-08-03T16:18:32.652Z · LW(p) · GW(p)
Sociopathy is basically absence of empathy. I'm not sure that mass downvoting (that is, manipulation of pretty meaningless numbers in a database about a web forum) qualifies as evidence pro.
Replies from: knb
comment by ChristianKl · 2016-08-03T10:31:06.501Z · LW(p) · GW(p)
Last week I had a discussion with a person who believed that because a science fiction film said that dolphins use 30% of their brain, dolphins indeed use 30% of their brain and therefore more than humans with their 10%.
It felt a bit painful, but it seems like the epistemic hygiene of some people in our society is very poor. Various producers of TV shows might have more responsibility for not making up facts than they believe they have.
Replies from: brp, MrMind
↑ comment by brp · 2022-10-21T08:20:49.430Z · LW(p) · GW(p)
(Spoilers for Interstellar)
I sat next to a person on a flight a few weeks ago who, upon talking about physics, said she thought the movie Interstellar was "amazing" and "scientific". I agreed with her, thinking she was talking about the realistic black hole simulations. No, she was talking about the scenes where the main character reaches back in time as a ghost to influence his daughter.
This person was a first-year Ph.D. student in medicine.
So yes, even when science fiction is done relatively carefully, some people will take as "scientific" the parts which have been stretched for better storytelling.
↑ comment by MrMind · 2016-08-04T09:45:42.278Z · LW(p) · GW(p)
Epistemic hygiene can be a bit tricky with fiction that gets some things right and blatantly gets other things wrong, but I believe the blame has to be put on the education system, which fails abysmally at teaching critical thinking, rather than on writers who fail at fact-checking.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-08-04T10:57:24.371Z · LW(p) · GW(p)
The issue isn't that writers don't fact-check, but that they purposefully make up facts to make their story "better". Those facts then get accepted as true by many people.
comment by Panorama · 2016-08-03T10:37:36.048Z · LW(p) · GW(p)
Medical benefits of dental floss unproven
Replies from: ChristianKl, username2
The federal government has recommended flossing since 1979, first in a surgeon general's report and later in the Dietary Guidelines for Americans issued every five years. The guidelines must be based on scientific evidence, under the law.
Last year, the Associated Press asked the departments of Health and Human Services and Agriculture for their evidence, and followed up with written requests under the Freedom of Information Act.
When the federal government issued its latest dietary guidelines this year, the flossing recommendation had been removed, without notice. In a letter to the AP, the government acknowledged the effectiveness of flossing had never been researched, as required.
The AP looked at the most rigorous research conducted over the past decade, focusing on 25 studies that generally compared the use of a toothbrush with the combination of toothbrushes and floss. The findings? The evidence for flossing is "weak, very unreliable," of "very low" quality, and carries "a moderate to large potential for bias."
↑ comment by ChristianKl · 2016-08-05T11:56:51.847Z · LW(p) · GW(p)
Given that there's no strong evidence for flossing and we still feel like doing it, maybe we should also go for newer technology that makes sense, like dental probiotics?
comment by moridinamael · 2016-08-01T14:33:23.703Z · LW(p) · GW(p)
I've made an app that would greatly benefit from being open source in order to allow users to write their own plugins. How do I make it open source while still satisfying my capitalist rent-seeking exploitative desires?
Replies from: Lumifer, akvadrako, ChristianKl
↑ comment by Lumifer · 2016-08-01T15:47:54.830Z · LW(p) · GW(p)
One way would be to expose an API to plugins but remain closed-source.
Another way would be to open-source the code but not release it under a CC license.
Yet another would be to release the app as {share|nag|beg|cripple|etc}ware.
Yet another would be to make the app free but charge for support and enhancements (freemium).
Replies from: gjm
↑ comment by gjm · 2016-08-02T10:54:12.625Z · LW(p) · GW(p)
If the code is available in a form that enables people to build it, that seems likely to reduce sales considerably whatever the licence. (In any case, I don't think CC-ness of the licence is the relevant feature.)
If the source code is available then nagging, begging and crippling are easily removed. (Unless the crippling is a matter of omission and the uncrippling bits are paid for -- but that's just one variety of freemium.)
Your first suggestion, a good plugin API, seems like the way to go. moridinamael, what advantages do you see to open source over a plugin API?
Other possible options:
- Divide the app into two parts. One is open-source and is the part that would be extended by plugins. One is closed-source and has most of the secret sauce in it. Someone buying the app gets the binaries for both parts and the source for the extensible part. Of course this is only any good if you can find a way to split the app up that doesn't kill its efficiency or break its architecture.
- The open-source extensible part might be minimal (just enough to support plugins -- this ends up looking a lot like the "plugin API" option, I think) or maximal (so that the only closed-source part is an "engine" that does some clever thing you are hoping other people can't duplicate) or in between.
- Have part of the app run not on the user's computer or mobile device but on servers under your control. Charge for access to those servers.
- Just make it open source and do something entirely different to satisfy your capitalist rent-seeking exploitative desires :-).
↑ comment by Lumifer · 2016-08-02T17:10:21.582Z · LW(p) · GW(p)
With respect to making the source available, the consequences depend on the target market and the price. For a multi-thousand-dollar product aimed at STEM professionals, yeah, people will bother to recompile without the offending bits. For a mass-market app priced at $0.99 no one will bother.
But I agree that some plugin API looks like the most natural way to proceed.
↑ comment by moridinamael · 2016-08-02T14:40:00.384Z · LW(p) · GW(p)
I agree that creating an API is probably the smartest way to go about it. The "disadvantage" to that approach is that I have to build and maintain an API.
The app architecture as it exists is also somewhat conducive to being "split" into an open, extensible part and a closed engine. However, I might wish I had just gone to the full effort of building an API, so that I don't have to constantly mentally track which parts of the code can reside in the public modules.
Having the app run on a server is also possible, but I have no familiarity with doing that.
Thanks for your thoughts.
↑ comment by ChristianKl · 2016-08-01T15:00:24.527Z · LW(p) · GW(p)
You don't need to publish software as open source to allow people to write plugins.
As far as the definition of open source goes, nobody considers Windows to be open source just because some of Microsoft's customers have access to the source code.
comment by James_Miller · 2016-08-01T03:51:43.749Z · LW(p) · GW(p)
I think we are entering an interesting political equilibrium where we have a significant number of voters who either (a) are not truth-oriented and care mostly about the emotional vibe coming from candidates or (b) believe that candidates would be foolish to tell the truth when it would disadvantage them with type (a) voters. The more voters who fall into types (a) and (b) the less worried candidates will be about telling the truth and the eventual equilibrium is where almost all voters are (a) or (b).
Replies from: Vaniver, Lumifer, ChristianKl
↑ comment by ChristianKl · 2016-08-01T13:28:53.591Z · LW(p) · GW(p)
Why do you think there's an equilibrium in this process?
Replies from: James_Miller
↑ comment by James_Miller · 2016-08-01T16:39:14.617Z · LW(p) · GW(p)
Trump realized a huge number of voters were in (a), and Clinton realized that the best way to respond to her email problems was to tell blatant lies. (Sorry for being so political, but this is my honest answer to a direct question.)
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-08-01T20:30:11.072Z · LW(p) · GW(p)
That doesn't mean that this happens to be an equilibrium. An equilibrium suggests that a system is stable.
Trump realized a huge number of voters were in (a)
After reading the article by his ghostwriter, I'm not sure that's true. It seems more likely to me that Trump is a psychopath in the clinical sense, who actually lies all the time and couldn't simply switch to being truthful.
The suggestion that Clinton lied in the email scandal also illustrates that lying isn't something she does all the time, but only when she thinks she needs to. With Trump that's very different. Take the lie about him opposing the Libya invasion - would any other Republican who was running have told it?
Trump tells that lie so convincingly that even Peter Thiel, in his speech, alluded to differences in handling Libya as a reason for supporting Trump.
comment by Artaxerxes · 2016-08-01T04:03:50.211Z · LW(p) · GW(p)
What's the worst case scenario involving climate change given that for some reason no large scale wars occur due to its contributing instability?
Climate change is very mainstream, with plenty of people and dollars working on the issue. LW and LW-adjacent groups discuss many causes that are thought to be higher impact and have more room for attention.
But I realised recently that my understanding of climate change related risks could probably be better, and I'm not easily able to compare the scale of climate change related risks to other causes. In particular I'm interested in estimations of metrics such as lives lost, economic cost, and similar.
If anyone can give me a rundown or point me in the right direction that would be appreciated.
Replies from: turchin, Lumifer, qmotus
↑ comment by turchin · 2016-08-01T09:32:09.110Z · LW(p) · GW(p)
Runaway global warming - small probability event with extinction level consequences. http://arctic-news.blogspot.ru/
Replies from: morganism, None
↑ comment by morganism · 2016-08-01T21:00:21.489Z · LW(p) · GW(p)
Just to say, the clathrate gun has probably gone off.
The shallow sea up between Alaska and Russia is now outgassing methane, and it is somehow producing a stationary high-pressure ridge. That weather front has been there for 3 years now, pushing winter storm tracks down to the SW states. That is also hindering rain which might have fallen in California, sending it down to Mexico instead. There is another stationary high out by Iceland, just reported this year, perhaps caused by the same thing.
Also be aware that the big conference held last year in England on ocean feedbacks dis-invited the two top field researchers, and only had academics present papers. The prime one is Shakhova, who has done fieldwork throughout the Arctic.
http://www.weatherwest.com/archives/tag/ridiculously-resilient-ridge
https://en.wikipedia.org/wiki/Ridiculously_Resilient_Ridge
http://motherboard.vice.com/blog/the-unusual-weather-pattern-at-the-root-of-californias-drought
and the scientists discuss forcings at sea
http://forum.arctic-sea-ice.net/index.php
Replies from: turchin
↑ comment by turchin · 2016-08-01T22:23:17.602Z · LW(p) · GW(p)
I think that climate change is a situation where we should go directly to plan B. Plan A here is cutting emissions. It is not working, because it is very expensive and requires cooperation from all sides. It also will not have immediate results, and the temperature will still grow for many reasons.
The plan B in climate change prevention is changing the opacity of the earth's atmosphere. It could be surprisingly cheap and local. There are suggestions to put something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.
"According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft." https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/
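A quick sanity check on the quoted 2040 figures (numbers taken straight from the article; this covers only delivery, not the full program cost):

```python
# 2040 figures quoted above: ~$700M/year to deliver ~250,000 metric tons.
annual_cost_usd = 700e6
annual_tons = 250_000
cost_per_ton = annual_cost_usd / annual_tons
print(f"implied delivery cost: ${cost_per_ton:,.0f} per metric ton")
```

At roughly $2,800 per ton delivered, the "surprisingly cheap" claim is about stratospheric delivery logistics, not the price of the acid itself.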
The problem with that approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering and bring global warming back immediately, with a vengeance.
There are other ways to prevent global warming. Plan C is creating an artificial nuclear winter by triggering a volcanic explosion or starting large-scale forest fires with nukes.
There are also ideas to recapture CO2 using genetically modified organisms, iron seeding in the ocean, and dispersing the carbon-capturing mineral olivine.
So we are not even close to being doomed by global warming - but we may have to change the way we react to it. We must accept that cutting emissions is not going to work on a 10-20 year horizon.
Replies from: Lumifer, qmotus
↑ comment by Lumifer · 2016-08-02T17:04:07.793Z · LW(p) · GW(p)
There are other ways to prevent global warming. Plan C is creating an artificial nuclear winter by triggering a volcanic explosion or starting large-scale forest fires with nukes.
Goes straight into the "Shit LW people say" bucket.
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2016-08-04T21:36:33.701Z · LW(p) · GW(p)
I think starting forest fires with flamethrowers, or cooling the Earth by painting things white, is probably the less exciting but more sensible approach.
Replies from: Lumifer
↑ comment by Lumifer · 2016-08-05T14:32:14.614Z · LW(p) · GW(p)
Frankly, I wouldn't use the word "sensible" anywhere near these approaches :-/
Replies from: skeptical_lurker
↑ comment by skeptical_lurker · 2016-09-11T14:51:44.635Z · LW(p) · GW(p)
Hang on, wouldn't starting forest fires create more CO2?
↑ comment by qmotus · 2016-08-02T10:15:14.090Z · LW(p) · GW(p)
I would still be a bit reluctant to advocate climate engineering, though. The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said. Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough about such measures to say that they're safe? Of course, if we believe that history will end anyway within decades or centuries because of the singularity, the long-term effects of such measures may not matter so much.
Also, many people, whether or not they're environmentalists strictly speaking, care about keeping our ecosystems at least somewhat undisrupted, and large scale climate engineering doesn't fit too well with that view.
But I agree that we're not progressing fast enough with emissions reductions (we're not progressing with them at all, actually), so we'll probably have to resort to some kind of plan B eventually.
Replies from: Lumifer, turchin
↑ comment by Lumifer · 2016-08-02T17:06:45.491Z · LW(p) · GW(p)
The main worry, of course, is that if we choose that route, we need to commit to it in the long term, like you said.
I don't know about that. I would expect the main worry to be that the Law of Unintended Consequences will do its usual thing except this time the relative size of its jaws compared to our ass will be... rather large.
↑ comment by turchin · 2016-08-02T11:02:46.641Z · LW(p) · GW(p)
In the current political situation in the world, cutting emissions can't be implemented. Period.
It may happen naturally in 20 years, once electric transportation takes over.
Plan B should be implemented if the situation suddenly changes for the worse - if the temperature jumps 3-5 C in one year. In that case the only option we would have is to bomb the Pinatubo volcano to make it erupt again.
But if we have prepared and tested measures for sun shielding, we could start them if the situation worsens.
It all looks like a political fight between Plan A and Plan B. You suggest not implementing Plan B, as it would undermine the perceived need to implement Plan A (cutting emissions). But the same logic works in the opposite direction: they will not cut emissions, to pressure policymakers into implementing Plan B. ))) It looks like a prisoner's dilemma between the two plans.
Replies from: qmotus
↑ comment by qmotus · 2016-08-02T11:56:00.558Z · LW(p) · GW(p)
It all looks like a political fight between Plan A and Plan B. You suggest not implementing Plan B, as it would undermine the perceived need to implement Plan A (cutting emissions).
That's one thing. But also, let's say that we choose Plan B, and this is taken as a sign that reducing emissions is unnecessary and global emissions soar. We then start pumping aerosols into the atmosphere to cool the climate.
Then something happens and this process stops: we face unexpected technical hurdles, or maybe the implementation of this plan has been largely left to a smallish number of nations and they are incapable or unwilling to implement it anymore, perhaps a large-scale war occurs, or something like that. Because of the extra CO2, we'd probably be worse off than if we had even partially succeeded with Plan A. So what's the expected payoff of choosing A or B?
As I said, I'm a bit wary of this, but I also think that it's important to research climate engineering technologies and make plans so that they can be implemented if (and probably when) necessary. The best option would probably be a mixture of plans A and B, but as you said, it looks like a bit of a prisoner's dilemma.
Replies from: turchin, turchin
↑ comment by turchin · 2016-08-02T12:50:01.731Z · LW(p) · GW(p)
One more thing I would like to add: the management of climate risks depends on their predictability, and it seems that it is not very high. The climate is a very complex and chaotic system.
It may react unexpectedly to our actions. This means that long-term actions are less favourable: the situation could change many times during their implementation.
Quick actions like solar management are better suited to managing poorly predictable processes, as we can see the results of our actions and quickly cancel them if we don't like the results, or make them stronger if we do.
↑ comment by turchin · 2016-08-02T12:39:50.659Z · LW(p) · GW(p)
I would also advocate for a mixture of both plans.
One more reason is that they work on different timescales. Cutting emissions and removing CO2 at the current level of technology would take decades to have an impact on the climate. But geo-engineering has a reaction time of around 1 year, so we could use it to cover bumps in the road.
Such covering will be especially important given that even if we completely stop emissions, we would also stop the global dimming from coal burning, which would result in a 3 C jump. Stopping emissions may thus cause a temperature jump, and we need a protection system for that case.
Anyway, we need to survive until stronger technologies arrive. Using nanotech or genetic engineering we could solve the warming problem with smaller efforts. But we have to survive until that date.
It looks to me like cutting emissions is overhyped and solar management is underhyped in public opinion and funding. By correcting this imbalance we could produce more common good.
Replies from: morganism
↑ comment by morganism · 2016-08-02T22:11:28.553Z · LW(p) · GW(p)
Actually, some of the geoengineering has been tried or studied: the acid in the atmosphere, and the iron dumped in the ocean.
The iron addition worked quite well, quadrupling the sea catch of fish and creating a bloom of aquatic algae and plankton. If we dump in a priming of shell-creating plankton along with the iron, we can pull down quite a bit of ocean CO2 and send it to the sea bottom. I think they have only published one scientific paper on it, but a couple of other papers on the economic gains were put out before they raided and arrested the folks who carried it out. The Canadian natives were pretty pleased with the results, though.
The other easily reversible technique would be solar reflectors in orbit. These could be dual-tasked as solar power satellites, to offset some power production. If you sent that power to the grids of the most polluting countries, they could decommission some of the worst power plants and cement factories. Studies are in progress, models have been launched to test, and there are lots of theoretical studies under solar sail tech. And the SLS needs something to launch.....
↑ comment by [deleted] · 2016-08-05T00:55:49.429Z · LW(p) · GW(p)
Maybe someone better at statistics than me (I'll withhold my own suspicions of the answer) can answer a question:
Given that life has been around for billions of years without very many huge extinction events, it seems likely that the environment is a very stable system and that runaway global warming won't happen.
However, if the environment is an extremely unstable system, such that runaway global warming always results in the total extinction of all life, then the anthropic principle comes into effect.
So my question is: can we actually say that runaway global warming is a small-probability event?
Replies from: gjm, turchin
↑ comment by gjm · 2016-08-05T08:52:24.802Z · LW(p) · GW(p)
Well, we could look at other planets that show any sign of ever having had an Earthlike atmosphere. Here's (I think) the list of such planets we know about and are able to observe: {Venus}.
That might be a pretty bad sign, but I'm not sure Venus's history is similar enough to earth's. (E.g., whatever got its atmosphere the way it is, it probably wasn't overproduction of CO2 by burning fossil fuels. Though, actually, I'm not sure how we'd know.)
Replies from: turchin
↑ comment by turchin · 2016-08-05T12:50:11.594Z · LW(p) · GW(p)
Venus doesn't have a magnetic field. Because of that, Venus lost hydrogen from its atmosphere to the solar wind and became very dry. So it had no life and no way to fix CO2 into carbonates in water. This resulted in a large accumulation of CO2 in the atmosphere and a strong greenhouse effect. It also changed the way its mantle creates continents, as dry mantle is not plastic. There is no plate tectonics on Venus; the surface is renewed every half a billion years in one large "supervolcanic" event.
Replies from: garabik
↑ comment by garabik · 2016-08-07T08:02:11.186Z · LW(p) · GW(p)
There is also the little issue of Venus receiving about twice the insolation that Earth does....
Replies from: turchin
↑ comment by turchin · 2016-08-09T11:34:37.933Z · LW(p) · GW(p)
But Venus's albedo is 0.75, while Earth's is 0.3. So Venus absorbs less solar energy than Earth, because of its very white upper cloud cover: http://www.universetoday.com/36833/albedo-of-venus/
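The two effects can be combined with rough numbers. A small Python check (solar constants and Bond albedos here are approximate textbook values; absorbed flux per unit area is insolation × (1 − albedo)):

```python
S_EARTH = 1361.0    # solar constant at Earth's orbit, W/m^2 (approximate)
S_VENUS = 2601.0    # at Venus's orbit, roughly 1.9x Earth's (approximate)
ALBEDO_EARTH = 0.30
ALBEDO_VENUS = 0.75

absorbed_earth = S_EARTH * (1 - ALBEDO_EARTH)
absorbed_venus = S_VENUS * (1 - ALBEDO_VENUS)
print(round(absorbed_earth), round(absorbed_venus))  # -> 953 650
```

So despite receiving nearly twice the sunlight, Venus absorbs around a third less energy per unit area than Earth does; its heat comes from the greenhouse effect of its CO2 atmosphere, not from higher absorbed insolation.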
↑ comment by turchin · 2016-08-05T12:45:06.401Z · LW(p) · GW(p)
I think that we strongly underestimate not only the probability of runaway global warming but also the fragility of our environment, because of anthropic bias.
Maybe runaway global warming is long overdue on our planet, and small human actions could trigger it.
I wrote an article about it several years ago, but I want to rewrite it completely. The current version is here: "Why anthropic principle stops to defend us: observation selection, future rate of natural disasters and fragility of our environment" http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defend-us-observation-selection-and-fragility-of-our-environment
↑ comment by Lumifer · 2016-08-01T15:52:33.428Z · LW(p) · GW(p)
The absolute worst case? Probably involves simultaneous and rapid release of the clathrates and the melting of the permafrost, a major disruption of the weather (in particular, precipitation) patterns across the globe, ocean currents -- notably the Gulf Stream -- changing their course, etc. Ah, go read any horror fiction by environmentalists. They wrote a lot.
↑ comment by qmotus · 2016-08-02T10:04:41.224Z · LW(p) · GW(p)
I think many EAs consider climate change very important, but often just think that it already receives a lot of attention and is difficult to solve, and that there are therefore better things to focus on. 80,000 Hours, for example.
comment by turchin · 2016-08-03T11:58:42.099Z · LW(p) · GW(p)
Any thoughts? "Musk-backed startup that wants to give away its artificial intelligence research, also wants to make sure AI isn’t used for nefarious purposes. That’s why it wants to create a new kind of police force: call them the AI cops."
http://www.wired.com/2016/08/openai-calling-techie-cops-battle-code-gone-rogue/
Replies from: ChristianKl, Vaniver↑ comment by ChristianKl · 2016-08-04T11:01:26.012Z · LW(p) · GW(p)
Companies like Google are going to invest more money into developing AI than OpenAI itself does. As such, OpenAI can't fulfill its mission by going it alone. What it can do is devise clever systems for interacting with other stakeholders and build relationships to push things in the right direction.
Single moves on the chessboard also don't reveal underlying strategy.
comment by morganism · 2016-08-01T21:06:28.888Z · LW(p) · GW(p)
Algorithms as a Microservice
Lots of folks saying not to bother learning coding now, it will be done by machine learning soon, so here is how to monetize your academic expertise...
http://thenewstack.io/algorithmia-new-algorithm-economy/
Replies from: morganism, ChristianKl↑ comment by morganism · 2016-08-13T22:40:44.330Z · LW(p) · GW(p)
‘talk to a physicist’ service
https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists
↑ comment by ChristianKl · 2016-08-02T06:45:43.098Z · LW(p) · GW(p)
Lots of folks saying not to bother learning coding now, it will be done by machine learning soon, so here is how to monetize your academic expertise...
Optimizing aspects of coding with machine learning means that coders become more productive, because some of the tasks they do can be automated. I don't think it's a reason against learning to code.
Replies from: morganism↑ comment by morganism · 2016-08-06T20:49:58.595Z · LW(p) · GW(p)
It's good to learn some Python at least, but..
"Computer program fixes old code faster than expert engineers:
One unexpected byproduct of the work is that it lets researchers see the different tricks that programmers used on the old code, like archaeologists combing through computational fossils.
“We can see the ‘bit hacks’ that engineers use to optimize their algorithms,” says Amarasinghe, “as well as better understand the larger context of how programmers approach different coding challenges.”
http://news.mit.edu/2015/computer-program-fixes-old-code-faster-than-expert-engineers-0609
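As an aside, a classic example of the sort of "bit hack" the article means (my own illustration, not one recovered by the MIT tool) is the branch-free power-of-two test:

```python
# Classic bit hack: a power of two has exactly one bit set,
# so n & (n - 1) clears that single bit and leaves zero.
def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0
```

Tricks like this are fast but opaque, which is exactly why having a tool surface and explain them is useful.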
Replies from: ChristianKl↑ comment by ChristianKl · 2016-08-08T08:32:14.029Z · LW(p) · GW(p)
"Computer program fixes old code faster than expert engineers:
Fixing old code isn't a job that most computer programmers want to do. In this particular example it's simply code that works like a better compiler. It's similar to how using Amazon's S3 is easier than configuring your own server: computer programmers get empowered by tools.
Replies from: morganism↑ comment by morganism · 2016-08-13T22:17:41.309Z · LW(p) · GW(p)
Here is a new tool architecture
Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge
http://arxiv.org/abs/1608.03000
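For context, the task in that paper is mapping a natural-language description to a regex. A hand-written sketch of one such description/regex pair (illustrative only, not output from the paper's system):

```python
import re

# Hypothetical description: "strings containing three digits followed by a dash"
pattern = re.compile(r"\d{3}-")

assert pattern.search("call 555-1234")       # matches the "555-" substring
assert not pattern.search("no digits here")  # no match
```

A hand-written regex like this can at least be inspected; the question is whether a generated one can be trusted the same way.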
Replies from: ChristianKl↑ comment by ChristianKl · 2016-08-14T11:57:24.588Z · LW(p) · GW(p)
The problem of programming isn't translating natural language into computer code. It's thinking about both the structure of the problem and the structure of the solution.
In practice programmers also want to write safe code. The architecture you link to doesn't seem to make clear guarantees about the data: it uses a neural network that works a bit like a black box. There are likely use cases where such a tool is useful, but I don't think it will replace the need for programmers in any meaningful way.
comment by polymathwannabe · 2016-08-01T20:37:32.868Z · LW(p) · GW(p)
To illustrate the topic I wish to present, I'll quote a review for Harry Potter and the Cursed Child, which complains that
In Rowling’s novels, characters deliver a mix of clever repartee and thudding exposition. Here Thorne [...] defaults to the latter. The result is a play that fails to utilize the most elementary of playwright’s tools: subtext. Characters say exactly what they feel, explain exactly what is happening, and warn about what they’re going to do before they do it.
My everyday failure to handle indirect statements may relate to this (as well as the disagreements I've had with literature majors, and my own difficulties when writing): I have no patience for subtext. People saying exactly what they feel is the way I wish the world worked. Is there something wrong with me?
Replies from: ChristianKl, skeptical_lurker, Viliam, MrMind↑ comment by ChristianKl · 2016-08-01T20:42:35.523Z · LW(p) · GW(p)
Have you spent time with people practicing Radical Honesty? I didn't get how it worked from reading articles about it but in practice the folks in that community are quite nice.
Replies from: polymathwannabe↑ comment by polymathwannabe · 2016-08-01T21:05:15.736Z · LW(p) · GW(p)
No such community exists near me.
↑ comment by skeptical_lurker · 2016-08-04T21:34:18.205Z · LW(p) · GW(p)
Subtext is harder to understand than clear communication, and so subtext can be enjoyable and signal intelligence in the same way that playing chess is more fun and shows more intelligence than playing tic-tac-toe.
I far prefer subtext in a story to subtext in real life. In a story the worst thing that can happen is that you believe Animal Farm really is about a bunch of animals. In real life the worst that can happen is that the pilot doesn't realise that when the navigator says 'the weather radar certainly is useful' the subtext is that the weather is too severe to fly in, and promptly flies the plane into a mountain. This actually happened.
↑ comment by MrMind · 2016-08-04T09:51:32.378Z · LW(p) · GW(p)
Is there something wrong with me?
Possibly, yes.
Humans have evolved to account for a lot of subtext, often in the form of theory of mind and empathy.
Stating feelings outright is also considered poor form in a novel (as per the "show, don't tell" motto); I guess in a play, where you can visibly see the emotions on the actors' faces, it's even more redundant and dull.
comment by [deleted] · 2016-08-07T14:54:34.230Z · LW(p) · GW(p)
The Neglected/Tractable/Scale framework for cause prioritisation is a blatant rip-off of the Hanlon method (see page 22 of Priority Setting for Research for Health). Economists, you've made a name for yourselves cannibalising other disciplines. You underestimate health, however - biostatisticians, epidemiologists and clinical researchers have fended off 'health economists' for years. With this, LessWrong, I say goodbye.