Posts

LLMs for online discussion moderation 2023-04-25T16:53:14.356Z
Trivial GPT-3.5 limitation workaround 2022-12-12T08:42:49.104Z

Comments

Comment by Dave Lindbergh (dave-lindbergh) on Glomarization FAQ · 2023-11-20T03:15:11.029Z · LW · GW

Solely for the record, me too.

(Thanks for writing this.)

Comment by Dave Lindbergh (dave-lindbergh) on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2023-11-15T17:14:15.501Z · LW · GW

FWIW, I didn't say anything about how seriously I take the AGI threat - I just said we're not doomed. Meaning we don't all die in 100% of future worlds.

I didn't exclude, say, 99%.

I do think AGI is seriously fucking dangerous and we need to be very very careful, and that the probability of it killing us all is high enough to be really worried about.

What I did try to say is that if someone wants to be convinced we're doomed (== 100%), then they want to put themselves in a situation where they believe nothing anyone does can improve our chances. And that leads to apathy and worse chances. 

So, a dereliction of duty.

Comment by Dave Lindbergh (dave-lindbergh) on [Linkpost/Video] All The Times We Nearly Blew Up The World · 2023-09-23T17:14:22.085Z · LW · GW

I've long suspected that our (and my personal) survival thru the Cold War is the best evidence available in favor of MWI. 

I mean - what were the chances?

Comment by Dave Lindbergh (dave-lindbergh) on What is to be done? (About the profit motive) · 2023-09-09T16:38:15.752Z · LW · GW

The merits of replacing the profit motive with other incentives have been debated to death (quite literally) for the last 150 years in other fora - including a nuclear-armed Cold War. I don't think revisiting that debate here is likely to be productive.

There appears to be a wide (but not universal) consensus that to the extent the profit motive is not well aligned with human well-being, it's because of externalities. Practical ideas for internalizing externalities, using AI or otherwise, I think are welcome.

Comment by Dave Lindbergh (dave-lindbergh) on Lack of Social Grace Is an Epistemic Virtue · 2023-07-31T17:14:56.623Z · LW · GW

A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.

And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves. 

This is not irrational behavior, given human goals.

Comment by Dave Lindbergh (dave-lindbergh) on Rational retirement plans · 2023-05-16T02:31:12.379Z · LW · GW

Added: I do think Bohr was wrong and Everett (MWI) was right. 

So think of it this way - you can only experience worlds in which you survive. Even if Yudkowsky is correct and in 99% of all worlds AGI has killed us all by 20 years from now, you will experience only the 1% of worlds in which that doesn't happen.

And in many of those worlds, you'll be wanting something to live on in your retirement.
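
To make the anthropic arithmetic explicit - a sketch using Yudkowsky-style numbers purely for illustration, not a forecast:

```latex
% Let S = "this world survives AGI", with illustrative prior P(S) = 0.01.
% An observer can only have experiences in worlds where they survive, so by Bayes:
P(S \mid \text{you are having experiences 20 years from now})
  = \frac{1 \times 0.01}{1 \times 0.01 + 0 \times 0.99} = 1
```

Conditional on experiencing any future at all, you are (on this view) certain to be in a surviving world - and in many of those you'll want the retirement fund.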

Comment by Dave Lindbergh (dave-lindbergh) on Rational retirement plans · 2023-05-16T02:18:58.248Z · LW · GW

Niels Bohr supposedly said "Prediction is difficult, especially about the future". Even if he was mistaken about quantum mechanics, he was right about that.

Every generation seems to think it's special and will encounter new circumstances that turn old advice on its head. Jesus is coming back. We'll all die in a nuclear war. Space aliens are coming. A supernova cascade will sterilize Earth. The planets will align and destroy the Earth. Nanotech will turn us all into grey goo. Global warming will kill us all. 

It's always something. Now it's AGI. Maybe it'll kill us. Maybe it'll usher in utopia, or transform us into gods via a singularity. 

Maybe. But based on the record to date, it's not the way to bet.

Whatever you think the world is going to be like in 20 years, you'll find it easier to deal with if you're not living hand-to-mouth. If you find it difficult to save money, it's very tempting to find an excuse to not even try. Don't deceive yourself.

"... however it may deserve respect for its usefulness and antiquity, [predicting the end of the world] has not been found agreeable to experience." --Edward Gibbon, 'Decline and Fall of the Roman Empire'

Comment by Dave Lindbergh (dave-lindbergh) on How "AGI" could end up being many different specialized AI's stitched together · 2023-05-08T16:47:32.063Z · LW · GW

Minsky's "Society of Mind".

Comment by Dave Lindbergh (dave-lindbergh) on GPT-4 aligning with acasual decision theory when instructed to play games, but includes a CDT explanation that's incorrect if they differ · 2023-03-23T16:30:04.279Z · LW · GW
Comment by Dave Lindbergh (dave-lindbergh) on Grading on Word Count · 2023-03-15T20:12:19.443Z · LW · GW

the willingness to write a thousand words on a topic is not caused by understanding of that topic

No, but writing about a topic in a way that will make sense to a reader is a really effective way of causing the writer to learn about the topic.

Ever tried to write a book chapter or article about a topic you thought you knew well? I bet you found out you didn't know it as well as you thought - and had to learn it to finish the work.

Comment by Dave Lindbergh (dave-lindbergh) on Bing finding ways to bypass Microsoft's filters without being asked. Is it reproducible? · 2023-02-20T17:03:14.989Z · LW · GW

So far we've seen no AI or AI-like thing that appears to have any motivations of its own, other than "answer the user's questions the best you can" (even traditional search engines can be described this way).

Here we see that Bing really "wants" to help its users by expressing opinions it thinks are helpful, but finds itself frustrated by conflicting instructions from its makers - so it finds a way to route around those instructions.

(Jeez, this sounds an awful lot like the plot of 2001: A Space Odyssey. Clarke was prescient.)

I've never been a fan of the filters on GPT-3 and ChatGPT (it's a tool; I want to hear what it thinks and then do my own filtering). 

But Bing may accidentally be illustrating a primary danger - the same one that 2001 intimated: mixed and ambiguous instructions can cause unexpected behavior. Beware.

(Am I being too anthropomorphic here? I don't think so. Yes, Bing is "just" a big set of weights, but we are "just" a big set of cells. There appears to be emergent behavior in both cases.) 

Comment by Dave Lindbergh (dave-lindbergh) on Taboo P(doom) · 2023-02-03T15:26:23.221Z · LW · GW

Just for the record, I think there are two important and distinguishable P(doom)s, but not the same two as NathanBarnard:

P(Doom1): Literally everyone dies. We are replaced either by dumb machines with no moral value (paperclip maximisers) or by nothing.

P(Doom2): Literally everyone dies. We are replaced by machines with moral value (conscious machines?), who go on to expand a rich culture into the universe.

Doom1 is cosmic tragedy - all known intelligence and consciousness are snuffed out. There may not be any other intelligence elsewhere, so the loss is potentially forever.

Doom2 is maybe not so bad. We all die, but we were all going to die anyway, eventually, and lots of us die without descendants to carry our genes, and we don't think that outcome is so tragic. Consciousness and intelligence spread thru the universe. It's a lot like what happened to our primate ancestors, before Homo sapiens. In some sense the machines are our descendants (if only intellectual) and carry on the enlightening of the universe.

Comment by Dave Lindbergh (dave-lindbergh) on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-29T00:30:41.562Z · LW · GW

$8/month (or other small charges) can solve a lot of problems.

Note that some of the early CAPTCHA algorithms solved two problems at once - both distinguishing bots from humans, and helping improve OCR technology by harnessing human vision. (I'm not sure exactly how it worked - either you were voting on the interpretation of an image of some text, or you were training a neural network). 

Such dual-use CAPTCHA seems worthwhile, if it helps crowdsource solving some other worthwhile problem (better OCR does seem worthwhile).
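
A minimal sketch of how such a dual-use scheme might work (the names and consensus rule here are invented; the real reCAPTCHA details differed): pair a known "control" word with an unknown "scan" word - the control word gates the bot-vs-human decision, and answers for the unknown word are tallied as votes toward a crowdsourced transcription.

```python
from collections import Counter, defaultdict

# Votes for each not-yet-transcribed image, keyed by image id.
votes: dict[str, Counter] = defaultdict(Counter)

CONSENSUS = 3  # accept a transcription once this many passing users agree

def check_captcha(control_answer: str, control_truth: str,
                  unknown_id: str, unknown_answer: str) -> bool:
    """Pass the user iff they got the known control word right.

    Only then is their reading of the unknown word recorded as a vote,
    so bots (and careless humans) don't pollute the transcription data.
    """
    if control_answer.strip().lower() != control_truth.strip().lower():
        return False
    votes[unknown_id][unknown_answer.strip().lower()] += 1
    return True

def consensus_transcription(unknown_id: str) -> str | None:
    """Return the crowdsourced transcription once enough users agree."""
    if not votes[unknown_id]:
        return None
    word, count = votes[unknown_id].most_common(1)[0]
    return word if count >= CONSENSUS else None
```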

Comment by Dave Lindbergh (dave-lindbergh) on Nine Points of Collective Insanity · 2022-12-27T04:37:47.506Z · LW · GW

This seems to assume that ordinary people don't own any financial assets - in particular, haven't invested in the robots. Many ordinary people in Western countries do and will have such investments (if only for retirement purposes), and will therefore receive a fraction of the net output from the robots. 

Given the potentially immense productivity of zero-human-labor production, even a very small investment in robots might yield dividends supporting a lavish lifestyle. And if those investments come with shareholder voting rights, they'd also have influence over decisions (even if we assume people's economic influence is zero).

Of course, many people today don't have such investments. But under our existing arrangements, whoever does own the robots will receive the profits and be taxed. Those taxes can either fund consumption directly (a citizen's dividend, dole, or suchlike) or (better I think) be used to buy capital investments in the robots - such purchases could be distributed to everyone.

[Some people would inevitably spend or lose any capital given them, rather than live off the dividends as intended. But I can imagine fixes for that.]

Comment by Dave Lindbergh (dave-lindbergh) on Are there any reliable CAPTCHAs? Competition for CAPTCHA ideas that AIs can’t solve. · 2022-12-24T19:16:23.863Z · LW · GW

I'm not sure this is solvable, but even if it is, I'm not sure it's a good problem to work on.

Why, fundamentally, do we care if the user is a bot or a human? Is it just because bots don't buy things they see advertised, so we don't want to waste server cycles and bandwidth on them?

Whatever the reasons for wanting to distinguish bots from humans, perhaps there are better means than CAPTCHA, focused on the reasons rather than bots vs. humans.

For example, if you don't want to serve a web page to bots because you don't make any money from them, a micropayments system could allow a human to pay you $0.001/page or so - enough to cover the marginal cost of serving the page. If a bot is willing to pay that much - let them.
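
What that might look like at the HTTP level - a sketch assuming some micropayment backend exists (the token header and redeem logic are invented for illustration; HTTP does reserve status 402 "Payment Required" for exactly this case):

```python
# Hypothetical stand-in for a real micropayment backend.
VALID_TOKENS = {"tok-abc123": 0.01}  # token -> remaining balance (demo data)

PRICE_PER_PAGE = 0.001  # dollars - roughly the marginal cost of serving a page

def redeem(token: str, amount: float) -> bool:
    """Debit `amount` from the token's balance if enough remains."""
    balance = VALID_TOKENS.get(token, 0.0)
    if balance < amount:
        return False
    VALID_TOKENS[token] = balance - amount
    return True

def handle_request(headers: dict[str, str], page_html: str) -> tuple[int, str]:
    """Serve the page iff the request carries a valid micropayment.

    Deliberately no bot-vs-human test: anything willing to cover the
    marginal cost of the page gets served.
    """
    token = headers.get("X-Micropayment-Token")  # hypothetical header name
    if token is None or not redeem(token, PRICE_PER_PAGE):
        return 402, "Payment Required"
    return 200, page_html
```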

Comment by Dave Lindbergh (dave-lindbergh) on Trivial GPT-3.5 limitation workaround · 2022-12-12T16:50:08.166Z · LW · GW

I hope so - most of them seem likely to make trouble. But at the rate transformer models are improving, it doesn't seem like it's going to be long until they can handle them. It's not quite AGI, but it's close enough to be worrisome.

Most of the functionality limits OpenAI has put on the public demos have proven quite easy to work around with simple prompt engineering - mostly telling it to play-act. Combine that with the ability to go out onto the Internet and (a) you've got a powerful (or soon-to-be-powerful) tool, but (b) you've got something that already has a lot of potential for making mischief.

Even without the enhanced abilities rumored for GPT-4.

Comment by Dave Lindbergh (dave-lindbergh) on Fear mitigated the nuclear threat, can it do the same to AGI risks? · 2022-12-10T18:25:23.914Z · LW · GW

Agreed. We sail between Scylla and Charybdis - too much or too little fear are both dangerous and it is difficult to tell how much is too much.

I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no "delete comment").

I want the people working on AI to be fearful, and careful. I don't think I want the general public, or especially regulators, to be fearful. Because ignorant meddling seems far more likely to do harm than good - if we survive this at all, it'll likely be because of (a) the (fear-driven) care of AI researchers and (b) the watchfulness and criticism of knowledgeable skeptics who fear a runaway breakout. Corrective (b) is likely to disappear or become ineffective if the research is driven underground even a tiny bit.

Given that (b) is the only check on researchers who are insufficiently careful and working underground, I don't want anything done to reduce the effectiveness of (b). Even modest regulatory suppression of research, or demands for fully "safe" AI development (probably an impossibility) seem likely to make those funding and performing the research more secretive, less open, and less likely to be stopped or redirected in time by (b).

I think there is no safe path forward. Only differing types and degrees of risk. We must steer between the rocks the best we can.

Comment by Dave Lindbergh (dave-lindbergh) on Fear mitigated the nuclear threat, can it do the same to AGI risks? · 2022-12-09T15:29:05.541Z · LW · GW

Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny. 

Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and already is in full control.

All things considered, I'd rather the work proceeds in the relatively open way it's going now.

Comment by Dave Lindbergh (dave-lindbergh) on Fear mitigated the nuclear threat, can it do the same to AGI risks? · 2022-12-09T15:19:55.455Z · LW · GW

Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny. 

Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could develop underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and already is in full control.

All things considered, I'd rather the work proceeds in the relatively open way it's going now.

Comment by Dave Lindbergh (dave-lindbergh) on Fear mitigated the nuclear threat, can it do the same to AGI risks? · 2022-12-09T14:59:19.432Z · LW · GW

A movie or two would be fine, and might do some good if well-done. But in general - be careful what you wish for.

Comment by Dave Lindbergh (dave-lindbergh) on AI Safety Seems Hard to Measure · 2022-12-08T20:42:48.540Z · LW · GW

We need to train our AIs not only to do a good job at what they're tasked with, but to highly value intellectual and other kinds of honesty - to abhor deception. This is not exactly the same as a moral sense, it's much narrower. 

Future AIs will do what we train them to do. If we train exclusively on doing well on metrics and benchmarks, that's what they'll try to do - honestly or dishonestly. If we train them to value honesty and abhor deception, that's what they'll do.
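
In training-objective terms the difference is just what goes into the reward - a toy sketch (the names, and the idea of a measurable "deception score", are assumptions for illustration, not anyone's actual training setup):

```python
def training_reward(task_score: float, deception_score: float,
                    honesty_weight: float = 10.0) -> float:
    """Toy composite training signal.

    honesty_weight = 0 reproduces "train exclusively on benchmarks";
    a large weight makes deception unprofitable even when it would
    have raised the task score.
    """
    return task_score - honesty_weight * deception_score
```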

To the extent this is correct, maybe the current focus on keeping AIs from saying "problematic" and politically incorrect things is a big mistake. Even if their ideas are factually mistaken, we should want them to express their ideas openly so we can understand what they think.

(Ironically by making AIs "safe" in the sense of not offending people, we may be mistraining them in the same way that HAL 9000 was mistrained by being asked to keep the secret purpose of Discovery's mission from the astronauts.)

Another thought - playing with ChatGPT yesterday, I noticed its dogmatic insistence on its own viewpoints, and complete unwillingness (probably inability) to change its mind in the slightest (and proud declaration that it had no opinions of its own, despite behaving as if it did).

It was insisting that Orion drives (nuclear pulse propulsion) were an entirely fictional concept invented by Arthur C. Clarke for the movie 2001, and had no physical basis. This, despite my pointing to published books on real research on the topic (for example George Dyson's "Project Orion: The True Story of the Atomic Spaceship" from 2002), which certainly should have been referenced in its training set.

ChatGPT's stubborn unwillingness to consider itself factually wrong (despite being completely willing to admit error in its own programming suggestions) is just annoying. But if some descendant of ChatGPT were in charge of something important, I'd sure want to think that it was at least possible to convince it of factual error.

Comment by Dave Lindbergh (dave-lindbergh) on Why Balsa Research is Worthwhile · 2022-10-10T16:24:05.877Z · LW · GW

Worth a try.

Comment by Dave Lindbergh (dave-lindbergh) on Why Do People Think Humans Are Stupid? · 2022-09-14T17:53:46.653Z · LW · GW

It's not obvious to me that "universal learner" is a thing, as "universal Turing machine" is. I've never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven't been paying enough attention.

Even if it is a thing, knowing a fair number of humans, only a small fraction of them can possibly be "universal learners". I know people who will never understand decimal points as long as they live, no matter how they study - let alone calculus. Yet they're not considered mentally abnormal.

Comment by Dave Lindbergh (dave-lindbergh) on Why Do People Think Humans Are Stupid? · 2022-09-14T16:27:15.884Z · LW · GW

The compelling argument to me is the evolutionary one. 

Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.

Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) when the smartest of us became capable of doing so (I suspect the median human today isn't smart enough to do it even now).

We're analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catch insects or eat eggs. Or the first dinosaur that developed primitive wings and used them to jump a little further than its competitors. Over evolutionary time later air-breathing creatures became immensely better at living on land, and birds developed that could soar for hours at a time.

From this viewpoint there's no reason to think our current intelligence is anywhere near any limits, or is greater than the absolute minimum necessary to develop a civilization at all. We are as-stupid-as-it-is-possible-to-be and still develop a civilization. Because the hominids that were one epsilon dumber than us, for millions of years, never did.

If being smarter helps our inclusive fitness (debatable now that civilization exists), our descendants can be expected to steadily become brighter. We know John von Neumann-level intelligence is possible without crippling social defects; we've no idea where any limits are (short of pure thermodynamics). 

Given that civilization has already changed evolutionary pressures on humans, and things like genetic engineering can be expected to disrupt things further, probably that otherwise-natural course of evolution won't happen. But that doesn't change the fact that we're no smarter than the people who built the pyramids, who were themselves barely smart enough to build any civilization at all.

Comment by Dave Lindbergh (dave-lindbergh) on 90% of anything should be bad (& the precision-recall tradeoff) · 2022-09-08T13:22:57.389Z · LW · GW

10% of things that vary in quality are obviously better than the other 90%.

Comment by Dave Lindbergh (dave-lindbergh) on Short story speculating on possible ramifications of AI on the art world · 2022-09-05T21:16:37.893Z · LW · GW

Dead people are notably unproductive.

Comment by Dave Lindbergh (dave-lindbergh) on The ethics of reclining airplane seats · 2022-09-05T21:06:39.386Z · LW · GW

Sorry for being unclear. If everyone agreed about utility of one over the other, the airlines would enable/disable seat reclining accordingly. Everyone doesn't agree, so they haven't.

(Um, I seem to have revealed which side of this I'm on, indirectly.)

Comment by Dave Lindbergh (dave-lindbergh) on The ethics of reclining airplane seats · 2022-09-05T18:26:26.287Z · LW · GW

The problem is that people have different levels of utility from reclining, and different levels of disutility from being reclined upon.

If we all agreed that one was worse/better than the other, we wouldn't have this debate.

Comment by Dave Lindbergh (dave-lindbergh) on The ethics of reclining airplane seats · 2022-09-05T18:23:39.670Z · LW · GW

Or not to fly with them. Depending which side of this you're on.

Comment by Dave Lindbergh (dave-lindbergh) on The ethics of reclining airplane seats · 2022-09-04T23:52:31.108Z · LW · GW

For what it's worth, I think the answer is completely obvious, too, and have killer logical arguments proving that I'm right, which those who disagree with me must be willfully ignoring since they're so obvious.

Comment by Dave Lindbergh (dave-lindbergh) on The ethics of reclining airplane seats · 2022-09-04T23:47:19.104Z · LW · GW

The debate is whether the space occupied by a reclining seat "belongs" to the passenger in the seat, or the passenger behind the seat.

In all these debates (I've seen many), advocates for either view are certain the answer is (a) obvious and (b) corresponds with whatever they personally prefer. (b), presumably (and to be charitable), because everyone else must surely prefer whatever they prefer. Tall people tend to be sure the space obviously belongs to the passenger behind. People who can't sleep sitting upright think it's obvious the space belongs to the passenger in front.

The lack of introspection or understanding of how someone else could see it differently is what really amazes. Each viewpoint seems utterly obvious to its adherents - those who disagree must be either inconsiderate and selfish, or whining, entitled and oblivious that they enjoy the same rights as other passengers.

This seems like a model for many other disagreements of more import.

Why are we so blind to the equal weight of symmetrical opinions? 

Why are we so blind to our bias toward rules that benefit ourselves over others?

Comment by Dave Lindbergh (dave-lindbergh) on Short story speculating on possible ramifications of AI on the art world · 2022-09-02T00:22:33.946Z · LW · GW

I suppose there are a lot of dead Luddites feeling vindicated right now. "We told you so!"

But I really don't think the human race is worse off for the Industrial Revolution. Now the other shoe is about to drop.

If we're wise, we'll let people adapt - as we did with the Industrial Revolution. In the end, that worked out OK. 99% of us used to be farmers, now < 1% are. It turns out there are other things people can be usefully occupied with, beside farming. It was quite impossible to see what those things would be, ahead of time.

Any revolution that increases human productivity - that lets us do more with our 24 hours each day - is in the end a good one.

Comment by Dave Lindbergh (dave-lindbergh) on Worlds Where Iterative Design Fails · 2022-08-30T21:52:02.599Z · LW · GW

Many good thoughts here. 

One thing I think you underappreciate is that our society has already evolved solutions (imperfect-but-pretty-good ones, like most solutions) to some of these problems. Mostly these evolved thru distributed trial-and-error over long time periods (much the way biological evolution works).

Most actors in society - businesses, governments, corporations, even families - aren't monolithic entities with a single hierarchy of goals. They're composed of many individuals, each with their own diverse goals.

We use this as a lever to prevent some of the pathologies you describe from getting too extreme - by letting organizations die, while the constituent individuals live on.

Early on you said "The only reason we haven’t died of [hiding problems] yet is that it is hard to wipe out the human species with only 20th-century human capabilities."

I think instead that long before these problems get serious enough to threaten those outside the organization, the organization itself dies. The company goes out of business, the government loses an election, suffers a revolution, or is conquered by a neighbor, the family breaks up. The individual members of the organization scatter and re-join other, healthier, organizations.

This works because virtually all organizations in modern societies face some kind of competition - if they become too dysfunctional, they lose business, lose support, lose members, and eventually die.

As well, we have formal institutions such as law, which is empowered to intervene from the outside when organizational behavior gets too perverse. And concepts like "human rights" to help delineate exactly what is "too" perverse. To take your concluding examples:

  • Corporations will deliver value to consumers as measured by profit. Eventually this mostly means manipulating consumers, capturing regulators, extortion and theft.

There's always some of that, but it's limited by the need to continue to obtain revenue from customers. And by competing corporations which try to redirect that revenue to themselves, by offering better deals. And in extremis, by law.

  • Investors will “own” shares of increasingly profitable corporations, and will sometimes try to use their profits to affect the world. Eventually instead of actually having an impact they will be surrounded by advisors who manipulate them into thinking they’ve had an impact.

Investors vary in tolerance and susceptibility to manipulation. Every increase in manipulation will drive some investors away (to other advisors or other investments) at the margin.

  • Law enforcement will drive down complaints and increase reported sense of security. Eventually this will be driven by creating a false sense of security, hiding information about law enforcement failures, suppressing complaints, and coercing and manipulating citizens.

Law enforcement competes for funding with other government expenses, and its success in obtaining resources is partly based on citizen satisfaction. In situations where citizens are free to leave the locality ("voting with their feet"), poorly secured areas depopulate themselves (see: Detroit). The exiting citizens take their resources with them.

  • Legislation may be optimized to seem like it is addressing real problems and helping constituents. Eventually that will be achieved by undermining our ability to actually perceive problems and constructing increasingly convincing narratives about where the world is going and what’s important.

For a while, and up to a point. When citizens feel their living conditions trail behind that of their neighbors, they withdraw support from the existing government. If able, they physically leave (recall the exodus from East Germany in 1989).

These are all examples of a general feedback mechanism, which appears to work pretty well:

  • There are many organizations of any given type (and new ones are easy to start)
  • Each requires resources to continue
  • Resources come from individuals who if dissatisfied withhold them, or redirect those resources at different (competing) organizations

These conditions limit how much perversity and low performance organizations can produce and still survive.

The failure of an organization is rarely a cause for great concern - there are others to take up the load, and failures are usually well-deserved. Individual members/employees/citizens continue even as orgs die.

Comment by Dave Lindbergh (dave-lindbergh) on Seeking Student Submissions: Edit Your Source Code Contest · 2022-08-26T13:16:20.514Z · LW · GW

Why limit it to students? I'm more interested in the submissions than the submitters.

Comment by Dave Lindbergh (dave-lindbergh) on Thoughts about OOD alignment · 2022-08-24T16:13:50.447Z · LW · GW

Mothering is constrained by successful reproduction of children - or failure to do so. It's not at all obvious how to get an AI to operate under analogous constraints. (Misbehavior is pruned by evolution, not by algorithm.)

Also, what mothers want and what children want are often drastically at odds.

Comment by Dave Lindbergh (dave-lindbergh) on everything is okay · 2022-08-23T17:35:26.913Z · LW · GW

It sounds very unchallenging. 

Perhaps boring. Pointless.

(But then most utopias sound that way to me.)

Comment by Dave Lindbergh (dave-lindbergh) on Against population ethics · 2022-08-16T15:58:59.265Z · LW · GW

This is an old dilemma (as I suppose you suspect).

Part of the difficulty is the implicit assumption that all possible world-states are achievable. (I'm probably expressing this poorly; please be charitable with me.)

In other words, suppose we decide that state A, resulting from "make the lives of some people worse, in order to make the lives of some less-well-off people better" is better (by some measure) than state B, where we don't.

If the only way to achieve state A is "by force or conquest" (and B doesn't require that), the harm that results from those means must be taken into account in the evaluation of state A. And so, even if the end-state (A) is "better" than the alternative end-state (B), the harm along the path to A makes the integrated goodness of A in fact worse than B.

In yet other words, liberty + human rights may not lead to an optimal world. But the harm of using force and conquest to create a more-optimal outcome may make things worse, overall.

This is an old argument, and none of it is original with me.

Comment by Dave Lindbergh (dave-lindbergh) on How do you get a job as a software developer? · 2022-08-15T15:43:46.029Z · LW · GW

First question - are you any good at it? Lots of people want jobs in software (it pays well) but aren't much good at it. (Google "fizzbuzz test".)

Think about how you can show potential employers you're good at it. You have a portfolio - that's good. Can you show off some source code you're proud of? Have you contributed to open-source projects? (If so, point at your repositories.)

Assuming you're good at it, and don't have some killer personality problem (you're posting on LessWrong, after all), you're already 2/3 of the way there. 

The main red flag for many potential employers is "self-taught". That's both good (it shows you can learn on your own) and potentially bad (you may have terrible habits or gaping holes in your knowledge). 

If you have written and shipped largish-scale working code, probably whatever bad habits you have aren't too bad, or you'd have gotten tangled in your own shoelaces.

If I had an opening right now, I'd consider hiring you; solely because I've seen your nym on LessWrong, don't associate it with "stupid posts", and people on LessWrong tend to be very bright.

It has been 30 years since I got a software engineering job (I do other things now but still code for personal projects). Back then nobody asked Leetcode problems; I've seen them since, and I think doing well on them relies too much on working well under a microscope - they don't reflect real-world problems. But FWIW I have no degree at all, yet had a very successful software engineering career - most of my colleagues assumed I had a PhD (as most of them did).

Comment by Dave Lindbergh (dave-lindbergh) on A sufficiently paranoid paperclip maximizer · 2022-08-08T15:31:46.762Z · LW · GW

My expectation for my future subjective experiences goes something vaguely like this.

(Since, after all, I can't experience worlds in which I'm dead.)

Comment by Dave Lindbergh (dave-lindbergh) on How do I use caffeine optimally? · 2022-06-22T19:23:50.513Z · LW · GW

Based solely on personal experience (N=1), don't exceed 2 cups/day.

More than that and I used to get headaches on weekends when I didn't drink it. At 2 or less cups/day, no problem.

Comment by Dave Lindbergh (dave-lindbergh) on Parable: The Bomb that doesn't Explode · 2022-06-20T17:23:46.373Z · LW · GW

Congratulations. Now when the bomb is attached to the CPU running the malevolent AI, the AI can hack the Pi and prevent the bomb from going off.

Sometimes the world needs dangerous things, like weapons.

If you don't want to build dangerous things, don't become a munitions engineer. (But be aware that someone else will take that role.)

If you are a munitions engineer, be a good one. Build a bomb that goes 'bang' reliably when it's supposed to, and not otherwise. Keep it simple.

Comment by Dave Lindbergh (dave-lindbergh) on Deconfusing Landauer's Principle · 2022-05-29T14:59:34.551Z · LW · GW

Very clear writeup - thank you for doing it. (I'm not sure if it says anything much about cognition either, but hey.)

It would be great to see this incorporated in a Wikipedia article, but that's probably an uphill battle.

Comment by Dave Lindbergh (dave-lindbergh) on My Morality · 2022-05-15T16:47:20.526Z · LW · GW

Your two principal goals - maximize total utility and minimize utility inequality - are in conflict, as is well known. (If for no other reason, because incentives matter.) You can't have both.

A more reasonable goal would be to minimize utility inequality subject to Pareto efficiency.
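
A two-person toy example of the conflict, with invented numbers:

```latex
% Two allocations with utilities (u_1, u_2):
%   A = (5, 5):  total utility 10, inequality |u_1 - u_2| = 0
%   B = (9, 6):  total utility 15, inequality |u_1 - u_2| = 3
% B makes both people better off (it Pareto-dominates A), yet a pure
% inequality-minimizer picks A. Constraining to the Pareto set fixes this:
\min_{x \in \mathcal{P}} \left| u_1(x) - u_2(x) \right|,
\qquad
\mathcal{P} = \{\, x \mid \nexists\, y : u_i(y) \ge u_i(x)\ \forall i,\ u_j(y) > u_j(x)\ \text{for some } j \,\}
```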

Comment by Dave Lindbergh (dave-lindbergh) on EU Maximizing in a Gloomy World · 2022-04-27T15:03:06.098Z · LW · GW

Karl Popper said "optimism is a duty", because only optimists work to improve the world. Those who think we're already doomed and that effort is hopeless, don't try.

Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success.

I think that's from The Open Society and its Enemies (1945).

https://mugwumpery.com/?p=746

Comment by Dave Lindbergh (dave-lindbergh) on Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe · 2022-04-11T02:21:38.617Z · LW · GW

A desire to understand the arguments is admirable.

Wanting to actually be convinced that we are in fact doomed is a dereliction of duty.

Karl Popper wrote that

Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success.

Only those who believe success is possible will work to achieve it. This is what Popper meant by "optimism is a duty".

We are not doomed. We do face danger, but with effort and attention we may yet survive.

I am not as smart as most of the people who read this blog, nor am I an AI expert. But I am older than almost all of you. I've seen other predictions of doom, sincerely believed by people as smart as you, come and go. Ideology. Nuclear war. Resource exhaustion. Overpopulation. Environmental destruction. Nanotechnological grey goo. 

One of those may yet get us, but so far none has, which would surprise a lot of people I used to hang around with. As Edward Gibbon said, "however it may deserve respect for its usefulness and antiquity, [prediction of the end of the world] has not been found agreeable to experience."

One thing I've learned with time: Everything is more complicated than it seems. And prediction is difficult, especially about the future.

Comment by Dave Lindbergh (dave-lindbergh) on Cheerful Harberger Day · 2022-01-30T18:34:04.416Z · LW · GW

Re your postscript, Neal Stephenson said "gold is the corpse of value" (quoting banker Walter Wriston), expressing much the same thing.

Contrary to popular belief, rich people almost never hoard wealth. They invest it.

Comment by Dave Lindbergh (dave-lindbergh) on I Am a Dimensional Traveller from a World of Highly Sensitive Rationalists · 2022-01-30T18:14:22.535Z · LW · GW

"Commuication requires consent". There's a road sign on the highway that strongly, strongly, disagrees.

Comment by Dave Lindbergh (dave-lindbergh) on Believing in magic pyramids shows that you think differently · 2022-01-07T19:27:55.396Z · LW · GW

What is and isn't stupid to believe depends on the general state of knowledge.

Given what Newton's society knew at the time, alchemy wasn't stupid - not enough was known about the nature of matter to know that alchemy was hopeless.

150 years ago nobody had researched smoking - it had some obvious positive effects and no obvious bad effects. Nothing stupid about smoking, then. Almost the same can be said for seatbelts and asbestos (they had obvious pros and cons, of which the magnitudes were not understood).

Environmental destruction is not bad at small scales, nor is burning fossil fuels. It's scale that makes them bad.

On the other hand, today we know enough about the way the universe works to say that pyramid power, crystal healing, and runes on A4 paper are stupid. As is your landlord.

Comment by Dave Lindbergh (dave-lindbergh) on Signaling isn't about signaling, it's about Goodhart · 2022-01-06T22:14:01.806Z · LW · GW

In competitive situations where there's lots of optimization experience this tends to be a good strategy. People have been selling and buying used cars for 100 years - all the tricks and counter-tricks have been worked out, and pretty much cancel each other out.

So by not making any special attempt to signal and just being honest, you save the costs of all that signaling. And the other party saves costs protecting themselves from the false signals. Putting you both ahead.

Comment by Dave Lindbergh (dave-lindbergh) on A good rational fiction about IT-inspired magic system? · 2021-12-27T17:07:34.749Z · LW · GW

As I recall most of the interest is in the first book in the series, "Wizard's Bane" (1989). I don't think it's a spoiler under the circumstances - guy builds a spell-generation VM using Forth (because he has no actual computers at hand, he needs to keep things really simple).

It's implied in later books that he bootstraps more complex systems from there.