Comments

Comment by greylag on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-07T18:18:06.424Z · score: 2 (2 votes) · LW · GW
Large projects behave similarly regardless of whether we are talking civil infrastructure, oil & gas, energy, mining, aerospace...

The industrial aspect of MCB seems to be "numerous, autonomous boats spraying water". Building a lot of adequately reliable boats doesn't sound like your typical megaproject, but more of an assembly-line job, something like Liberty ships. Adequately developing the process of managing large numbers of drone ships might be a prerequisite, and doubtless has other military and civil applications.

(Of course, whether MCB affects the climate as hoped is another question altogether).

Comment by greylag on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-07T06:22:02.303Z · score: 1 (1 votes) · LW · GW

why did you post this in the answers section?

Oh. By accident - sorry! Ah, there is a “move to comments” button. I will press it.

Comment by greylag on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T12:34:48.261Z · score: 2 (2 votes) · LW · GW

Is there such a thing as “EA, but for carbon offsetting”? I can imagine an organisation that would invest in a weighted mix of direct carbon capture, lobbying, geoengineering, funding renewable energy, research, ...

Comment by greylag on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-06T12:32:41.868Z · score: 3 (4 votes) · LW · GW

... public support for any sort of emission control will evaporate the moment geoengineering is realised as a tolerable alternative. Once the public believe, there will never be a quorum of voters willing to sacrifice anything of their own to reduce emissions

More precisely, public support for emission control that requires personal sacrifice. Energy efficiency measures have been estimated to cost “substantially less than the cost of meeting electricity needs with new power plants”, for example.

Comment by greylag on Is there a user's manual to using the internet more efficiently? · 2019-08-06T06:34:24.774Z · score: 3 (2 votes) · LW · GW

I think the more practical ideas in it (custom RSS readers?) are outdated.

Comment by greylag on Is there a user's manual to using the internet more efficiently? · 2019-08-05T05:51:40.976Z · score: 1 (1 votes) · LW · GW

Sounds a bit like Howard Rheingold’s Net Smart.

Comment by greylag on Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? · 2019-07-31T07:05:46.304Z · score: 1 (1 votes) · LW · GW

Autonomy can allow for higher density by:

1) at worst reducing, at best eliminating, the headway between vehicles that's needed to allow human drivers to react (@shminux's "zooming in all directions");

2) in busy times and locations, aggregating multiple journeys into multiple-occupancy vehicles running ad-hoc routes. (I think that's what the OECD "shared mobility liveable cities" study is proposing; UberPool is similar; Citymapper's "smart buses" are similar, though all with human drivers.)
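The capacity gain from point 1 can be sketched with a simple flow model. Every number below (speed, headways, vehicle length) is an illustrative assumption of mine, not a figure from the OECD study or any other source:

```python
# Toy lane-capacity model: vehicles/hour = speed / nose-to-nose spacing,
# where spacing = (reaction headway x speed) + vehicle length.
# All parameters are illustrative assumptions.

def lane_capacity(speed_ms, headway_s, vehicle_len_m):
    """Vehicles per hour one lane carries at a given speed,
    reaction headway, and vehicle length."""
    spacing_m = headway_s * speed_ms + vehicle_len_m  # nose-to-nose spacing
    return 3600 * speed_ms / spacing_m

speed = 15.0  # m/s, roughly 54 km/h urban arterial
human = lane_capacity(speed, headway_s=1.5, vehicle_len_m=5)       # human reaction gap
autonomous = lane_capacity(speed, headway_s=0.3, vehicle_len_m=5)  # short platooning gap

print(f"human:      {human:.0f} veh/h")
print(f"autonomous: {autonomous:.0f} veh/h ({autonomous / human:.1f}x)")
```

In this toy model, cutting the reaction headway from 1.5 s to 0.3 s roughly triples lane throughput at urban speeds, which is the "higher density" claim in miniature.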

Comment by greylag on Will autonomous cars be more economical/efficient as shared urban transit than busses or trains, and by how much? What's some good research on this? · 2019-07-31T07:00:35.446Z · score: 2 (2 votes) · LW · GW

@makoyass I think you would be interested in The End of Traffic and the Future of Access . I haven't read it, though I have read some of Levinson's other work; it's a bit on the dry-and-wonkish side, but I expect you would prefer that to "rabid conflict-theorist", and it's covering the right sort of ground.

Comment by greylag on Black hole narratives · 2019-07-08T19:53:25.662Z · score: 1 (1 votes) · LW · GW

In that case, I have completely misunderstood.

Comment by greylag on Black hole narratives · 2019-07-08T07:21:26.195Z · score: 3 (2 votes) · LW · GW

(Epistemic status: improvising wildly)

You are being outdebated because you are arguing with a memeplex evolved for dragging people into paroxysms of ambiguous guilt. (More prosaically, you can be outargued by, say, a car salesman: if he convinces you to spend much more money than you intended, because there is this really good offer available right now, that means he is better at this than you. It is, after all, his job.)

I suspect the ambiguity is important. As Said said, equivocation; Motte and Bailey is similar.

My problem with all this: sometimes intense guilt IS the appropriate response. There may be an aspect from which your behaviour is, in fact, reprehensible! This seems to rule out hard and fast “well, don’t let people make you feel guilty” heuristics.

In passing, taboo “narrative”. It has at least two distinct meanings: a particular story (usually linear), or something more like a worldview, ideology or doctrine.

Comment by greylag on Black hole narratives · 2019-07-08T06:59:45.880Z · score: 4 (3 votes) · LW · GW

(Epistemic status: snowclone in the style of Scott Alexander) “Haters gonna hate,” said Taylor, Swiftly

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-04T06:02:25.897Z · score: 3 (2 votes) · LW · GW

Calvinism resembles abusive parenting more than any sort of ethical principle.

I think this might be an important distinction:

“We are all flawed/evil and have to somehow make the best of it” (Calvinism, interpreted charitably)

vs

“YOU are evil/worthless”, said to a child by a parent who believes it (abusive parenting, interpreted uncharitably, painted orange with a bullseye on it)

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-04T05:55:52.035Z · score: 1 (1 votes) · LW · GW

I think you get similar answers whether consequentialist or deontological.

Consequentialist: the consequences end up terrible irrespective of your actions.

Deontological: the set of rules and duties is contradictory (as you suggest) or requires superhuman control over your environment/society, or your subconscious mind.

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-03T19:11:51.827Z · score: 3 (2 votes) · LW · GW

[F/X: penny drops]. Thank you.

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-03T19:08:13.359Z · score: 3 (2 votes) · LW · GW
Calvinists definitely believe ethical improvement is possible

I'm a stranger to theology so maybe I'm misunderstanding, but it sounds like Calvin thought people had free will, but other "Calvinists" thought one's moral status was predestined by God, prohibiting ethical improvement. (I assume John Calvin isn't going to exercise a Dennett/Conway Perspective Flip Get-Out-Clause, because if he does, the universe is trolling me).

(Aside: I sometimes think atheists with Judeo-Christian heritage risk losing the grace and keeping the damnation)

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-03T18:52:22.121Z · score: 2 (2 votes) · LW · GW
steer in the right direction: make things around you better instead of worse, based on your intrinsically motivating discernment ... try to make nicer things happen. And get more foresight, perspective, and cooperation as you go, so you can participate in steering bigger things on longer timescales using more information.

This seems kind of like... be a good person, or possibly be good, or do good. And I can't square it with:

accept that you are irredeemably evil

If you're irredeemably evil, your discernment is not to be trusted and your efforts are surely futile!

Is jessicata writing as if being a good person implies being thoroughly good in all respects, incapable of evil, perhaps incapable of serious error, perhaps single-handedly capable of lifting an entire society's ethical standing? That's a very tall order. I don't think that's what "good person" means. I don't think that's a reasonable standard to hold anyone to.

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-03T18:19:19.634Z · score: 11 (5 votes) · LW · GW
being a "good person" requires having properties X, Y, and Z. Well, it turns out that no one, or nearly no one, has properties X&Y&Z, and also couldn't achieve them quickly even with effort. Therefore, no one is a "good person" by that definition.

Some examples of varying flavour, to see if I've understood:

Being a good person means not being racist, *but* being racist involves unconscious components (which Susan has limited control over because they are below conscious awareness) and structural components (which Susan has limited control over because she is not a world dictator). Therefore Susan is racist, therefore not good.

Being a good person means not exploiting other people abusively, *but* large parts of the world economy rely on exploiting people, and Bob, so long as he lives and breathes, cannot help passively exploiting people, so he cannot be good.

Alice likes to think of herself as a good person, but according to Robin Hanson, most of what she is doing turns out to be signalling. Alice is dismayed that she is a much shallower and more egotistical person than she had previously imagined.

Comment by greylag on Self-consciousness wants to make everything about itself · 2019-07-03T18:01:38.370Z · score: 1 (1 votes) · LW · GW

Kaj, I'm having real trouble parsing this:

the "nobody is good" move, when it works, [changes] your concepts so as to redefine the "evil" act to no longer be evidence about you being evil. But ... this move only works to heal the underlying insecurity when your brain considers the update to be true - which is to say, when it perceives the "evil act" to be something which your social environment will no longer judge you harshly for, allowing the behavior to update.

Would you clarify?

Comment by greylag on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T19:27:31.687Z · score: 6 (4 votes) · LW · GW
About 80% of Americans think “political correctness is a problem”; and even when you restrict to self-identified liberals, Democrats, or people of color, large majorities agree with the statement.

This is interesting to me because it's surprising: I'd expect a sharper ideological divide on whether PC is a problem or not. I don't think "PC is a problem" reliably means "I have a high tolerance for verbal conflict"; "PC is a problem" can be read as "people trying to enforce political correctness are picking fights for no good reason and escalating verbal conflict".

Americans have become more tolerant of allowing people with controversial views to speak in public

This is an old-fashioned "Who should be allowed to speak in a town hall meeting in favour of outrageous opinion X?" sort of question. Frankly, this is a pattern-matched answer: "Yes, we believe in freedom of speech". I'm not sure the answer would be the same if the question were "Social media is full of (highly persuasive) advocacy for outrageous opinion X - is this acceptable?".

moderate liberals... against free speech

Hm. Why? Some explanations plucked out of thin air:

(a) To oppose free speech, you have to have enough people on "your side" that you might *succeed*, or your training/experience has to come from a situation where opposing free speech might succeed. (Large concentrations of moderate-left liberals on a campus.)

(b) Groups who have decided that "too much free speech is a problem" come from some particular community incompatible with being on the radical left. You're not going to find many hard-left-wingers in the RAND corporation thinking about counterinsurgency strategy; a SJW memeplex sweeping across college freshmen is going to do better if the freshmen don't have to already be Marxist believers to partake.

I think I'm suggesting that there might be *confounders* on the political-spectrum/free-speech-advocacy graph: "being a campus liberal causes free-speech-opposition and causes moderate-left beliefs" seems much more plausible to me than "there's a spontaneous peak in censorship advocacy at this point in the political spectrum".
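The confounder story is easy to demonstrate with a toy simulation (all probabilities invented, purely for illustration): if "on campus" raises the odds of both moderate-left identification and speech-restriction support, the two correlate even though neither causes the other.

```python
# Toy confounding simulation (all numbers invented). "campus" raises the
# probability of BOTH moderate-left identification and speech-restriction
# support, so the two correlate with no direct causal link between them.
import random

random.seed(0)
rows = []
for _ in range(100_000):
    campus = random.random() < 0.3                               # confounder
    moderate_left = random.random() < (0.6 if campus else 0.3)   # caused by campus
    anti_speech = random.random() < (0.5 if campus else 0.2)     # also caused by campus
    rows.append((moderate_left, anti_speech))

def p_anti(given_left):
    """Empirical P(anti-speech | moderate-left == given_left)."""
    sel = [a for m, a in rows if m == given_left]
    return sum(sel) / len(sel)

print(f"P(anti-speech | moderate-left)     = {p_anti(True):.2f}")
print(f"P(anti-speech | not moderate-left) = {p_anti(False):.2f}")
```

The simulated data show an apparent "moderate-left causes censorship advocacy" association that vanishes once you condition on the campus variable.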

The most passionate opponents of chaos are likely to be powerful, since change can only knock them off their pedestals

Opponents of chaos will be people with something to lose or something to protect. The ultra-rich 0.01% have the endurance to ride out most consequences this side of Armageddon, and the more excitable ones might see the chaos as an opportunity, or as a necessary evil.

Comment by greylag on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T07:48:50.019Z · score: 2 (2 votes) · LW · GW
Renee diResta, ... chilling — a call for social media to be actively regulated by the US military, ... New Knowledge, a firm offering corporations a new kind of service: using algorithms to bury social media scandals that would make them look bad.

This seems a very uncharitable interpretation. Is it deserved?

Comment by greylag on Scrying for outcomes where the problem of deepfakes has been solved · 2019-04-15T12:02:22.609Z · score: 2 (2 votes) · LW · GW

Alternative: notarised alibis as a service. The anonymous (and dubiously valid) video received by the broadcaster has me red-handed in the library with the obvious melee weapon, but MY signed and notarised personal cloud of things has me harmlessly drinking in a pub at the same time in a different location, which proves beyond reasonable doubt that the deepfake is fake.

In other words: it’s a tough call ensuring all the wannabe bad actors have adequately sealed and trusted cameras, at which point panoptical surveillance by a vaguely trusted system starts to seem like a good alternative.

(This feels very Brin, so I may have stolen it from him)

Depending how trustworthy the surveillance is, this may merely be an express route to a different dystopia.

Comment by greylag on Extraordinary ethics require extraordinary arguments · 2019-02-18T07:15:52.120Z · score: 2 (2 votes) · LW · GW

A way I would suggest looking at it: your scrupulosity dæmon has its own estimates of prior probabilities for such things as “you being a fundamentally decent person” and “you being on the cusp of accomplishing dreadful evil, maybe by accident”, and those estimates are, respectively, low, and high, because the dæmon shares your mental substrate and inherits the effects of depression.

As with us, the dæmon’s priors guide its theorising. It expects you to accomplish dreadful evil. How? Well, if your prior probability for something is high, and there isn’t a simple explanation, then there must be a complicated explanation! And people are very good at producing complicated hypotheses out of nothing, we’ve been doing it for millennia.
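A minimal sketch of the dæmon's arithmetic, with invented numbers: a sufficiently lopsided prior survives a long run of mildly reassuring evidence.

```python
# Caricature of the daemon's Bayesian update (all numbers invented).
# With an extreme prior on "dreadful evil is coming", even many pieces
# of mildly reassuring evidence leave the posterior high.

def update(prior, likelihood_ratio):
    """One Bayesian update in odds form.
    likelihood_ratio = P(evidence | evil) / P(evidence | decent)."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.95                  # daemon's prior: "dreadful evil is coming"
for day in range(10):
    p = update(p, 0.8)    # each day's evidence mildly favours "decent"
print(f"after 10 reassuring days: {p:.2f}")
```

Ten days of evidence each favouring "decent" by 5:4 still leaves the dæmon around two-to-one confident in imminent evil, so it goes looking for the complicated explanation.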

Comment by greylag on The Argument from Philosophical Difficulty · 2019-02-10T07:35:21.029Z · score: 3 (3 votes) · LW · GW

Optimistic scenario 6: Technological progress in AI makes difficult philosophical problems much easier. (Lots of overlap with corrigibility). Early examples: Axelrod’s tournaments, Dennett on Conway’s Life as a tool for thinking more clearly about free will.

(This is probably a special case of corrigibility).

Comment by greylag on Is Clickbait Destroying Our General Intelligence? · 2018-11-18T22:00:00.582Z · score: 3 (3 votes) · LW · GW

Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there's way too much flash and it ate the substance, it's showing off way too hard.

Hypotheses:

  1. This is the millennia-long tension between Enlightenment and Romanticism. Romanticism feels deeply wrong to someone on team Enlightenment, especially when stealing Enlightenment’s science fictional tropes!
  2. A cultural Idea Trap. Great Stagnation gives you Cyberpunk. (Doubtful, suspect events occurred in wrong order)

Comment by greylag on Is Clickbait Destroying Our General Intelligence? · 2018-11-18T21:48:20.847Z · score: 5 (3 votes) · LW · GW

Early Heinlein, because my parents didn't want me reading the later books.

This seems like exceptionally good judgement.

Comment by greylag on Is Clickbait Destroying Our General Intelligence? · 2018-11-18T21:45:53.015Z · score: 1 (1 votes) · LW · GW

the intense competition to get into Harvard is producing a monoculture of students who've lined up every single standard accomplishment and how these students don't know anything else they want to do with their lives

This is Goodhart’s Law run riot, yes?

Comment by greylag on Where is my Flying Car? · 2018-10-18T19:05:25.257Z · score: 1 (1 votes) · LW · GW

I might have to read the book. I'm not sure I want to, if it's going to be 60% nostalgia for a future that didn't happen, and 40% blaming "fundamentalist" environmentalists for everything.

For 1000 mph are we talking SSTs or vactrains? Vactrains, depending on pumping losses, might be quite (whisper it, so Josh can't hear) ergophobic. Quiet, too. For SSTs, do we just displace all electric load to nukes, so the cost of kerosene is unimportant, and fly larger Concordes? (Anyone who doesn't like sudden loud noises is an environmentalist!) Do we fly SSTs higher, maybe fuelled with cryogenic hydrogen, made from water and low-demand-period electricity?

Undersea cities... don't sound *very* energy-intensive. How do the citizens of Atlantis earn their living?

What would the lunar base be for? Exploration? An observatory? Helium-3? Near-future projections often involved mining the moon then rail-launching ore into orbit - probably solar-powered. Is this ergophobic? It's much more efficient than a chemical rocket...

Comment by greylag on Where is my Flying Car? · 2018-10-17T13:01:33.202Z · score: 1 (1 votes) · LW · GW

Could you give some more examples of "innovating" and "disappointing" industries?

Were those SF writers clueless optimists, making mostly random forecasting errors? No! Josh shows that for the least energy intensive technologies, their optimism was about right...

Arthur C. Clarke's Comsats: energy-intensive, huge success

Asimov's robots: disappointing because intelligence turns out to be harder than we thought.

Asimov's "Psychohistory": disappointing because chaos theory?

James Blish's "Cities in Flight": antigravity and force fields, ironically capable of running off a small zinc-air battery. Disappointing because we haven't found a physics rootkit that interesting.

What if we pick randomly from http://www.technovelgy.com?