post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by sanxiyn · 2022-12-23T04:02:23.400Z · LW(p) · GW(p)

No doubt the earliest pioneers of computer science, emerging from the (relatively) primitive cave of electrical engineering, stridently believed that all future computer scientists would need to command a deep understanding of semiconductors, binary arithmetic, and microprocessor design to understand software.

This is disappointing, although not unexpected. Computer scientists are, in general, tragically bad at the history of their own field, though all of science is like that; it is not specific to computer science. Compare Alan Turing, who wrote in 1945, before there was any actual computer to program:

This process of constructing instruction tables should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.

That is, the earliest pioneers of computer science were thinking about automating programming tasks before any actual programming existed. As far as I know, zero actual pioneers of computer science believed programmers would need to understand hardware design to understand software. That would be especially unlike Alan Turing, since the big deal about the Turing machine was that a universal Turing machine can be built such that the underlying computing substrate can be ignored. Misconceptions like this seem specific to people like Matt Welsh, who came after the field was born and didn't bother to study the history of how it was born.

comment by Vladimir_Nesov · 2022-12-23T00:30:41.158Z · LW(p) · GW(p)

A lot of things humans do are AGI-complete; they won't be automated in a business-as-usual world before everything else changes too, around the same time. There is no timeline where some of them happen 4 years from now while others happen 7 years after that. Possibly even good self-driving cars are AGI-complete, but programming certainly is.

Thus there is no straightforward fire alarm: anything that actually happens before AGI is not AGI-complete, and so doesn't directly demonstrate the possibility of an AGI; and anything that is AGI-complete won't work before AGI.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2022-12-23T14:59:54.108Z · LW(p) · GW(p)

I made a bit of a mistake in my wording of the post. I wrote:

I think the rate of advancement of LLMs indicates that this is possible in the near-term, <5 years, and could result in significant financial problems

I accidentally used weasel words here; that has become a force of habit due to the style of writing required for my job. I meant to introduce the possibility as a serious risk that's obviously worth considering, not to claim that it would probably (>50%) happen very soon. The words "I think", "indicates", "this is possible", and "could result" contrasted badly with the precise numbers I put immediately afterwards, and that was entirely my mistake.

My concern here was that, with a little bit of real effort, someone on LW could forecast that the labor market in the Bay Area is headed for a nightmare scenario, which has significant implications for humanity's ~300 AI safety researchers [EA · GW], who are largely located inside the Bay Area economy. This was based entirely on recent LLM advancements, not AGI timelines or AGI indicators.

comment by jaspax · 2022-12-23T03:04:36.834Z · LW(p) · GW(p)

Programming has already been automated several times. First, as indicated above, it was automated by moving from actual electronics to machine code. Then machine code was automated by compilers, and then most of the manual busywork of compiled languages was automated by higher-level languages with GC, OO, and various other acronyms.
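As a minimal sketch of that last step, here is the same small task with the C-era busywork shown in comments and a garbage-collected language doing it automatically (Python chosen purely for illustration):

```python
# Building a growable array of squares. In C this means manual memory
# management, roughly:
#
#   int *xs = malloc(cap * sizeof(int));   /* allocate */
#   /* ... grow with realloc, track length and capacity by hand ... */
#   free(xs);                              /* release */
#
# A higher-level language with GC automates all of that busywork:
squares = [i * i for i in range(10)]  # allocation, growth, and cleanup handled for us
print(squares)
```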

In other words, I fully expect that LLM-driven tools for code generation will become a standard and necessary part of the software developer's toolkit. But I highly doubt that software development itself will be made obsolete; rather, it will move up to the next level of abstraction and continue from there.

Replies from: stephen-mcaleese, Bezzi
comment by Stephen McAleese (stephen-mcaleese) · 2022-12-23T23:44:12.893Z · LW(p) · GW(p)

I'm not sure about software engineering as a whole, but I can see AI making programming obsolete.

it will move up to the next level of abstraction and continue from there

My worry is that the next level of abstraction above Python is plain English, and that anyone will be able to write programs just by asking "Write an app that does X", except they'll ask an AI instead of a freelance developer.

The historical trend has been that programming becomes easier. But maybe programming will become so easy that everyone can do it and programmers won't be needed anymore.

A historical analogy is search, which used to be a skilled job done by librarians and involved creating logical queries from keywords (e.g. 'house' AND 'car'). Now natural-language search makes it possible for anyone to use Google, and we don't need librarians for search anymore.
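A toy sketch of that contrast (the document set and query are made up for illustration):

```python
# Librarian-era search: explicit boolean keyword queries like 'house' AND 'car'.
docs = {
    1: "a house with a car in the driveway",
    2: "a car parked downtown",
    3: "a house in the suburbs",
}

def boolean_and(*keywords):
    """Return the ids of documents containing every keyword."""
    return [doc_id for doc_id, text in docs.items()
            if all(kw in text.split() for kw in keywords)]

print(boolean_and("house", "car"))  # [1]
# Today's user just types a natural-language question into Google instead.
```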

The same could happen to programming. Like librarians were for search, programmers seem to be a middleman between the user requesting a feature and the finished software. Historically, programming computers has been too difficult for average people, but that might not be true for long.

Replies from: Bezzi
comment by Bezzi · 2022-12-24T10:21:23.008Z · LW(p) · GW(p)

Unless we are assuming truly awesome models able to flawlessly write full-fledged apps of arbitrary complexity without any human editing, I think you are underestimating how bad the average person is at programming. "Being able to correctly describe an algorithm in plain English" is not a common skill. Even being able to correctly describe a problem is not so common, because the average person doesn't even know what a programming variable is.

I've been in computer science classrooms, and even the typical CS student often makes huge mistakes when writing pseudocode on paper (which is basically a program in plain English). This has nothing to do with knowing Python syntax; those people are bad at abstract reasoning, and I am quite skeptical that an LLM could do all the abstract reasoning for them.
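As a small illustration of how underspecified plain English tends to be, even a one-line request admits multiple correct-sounding programs:

```python
# "Remove the duplicates from this list" -- two defensible readings:
items = ["b", "a", "b", "c", "a"]

print(list(set(items)))            # duplicates gone, but original order is lost
print(list(dict.fromkeys(items)))  # duplicates gone, first-seen order preserved
# The requester rarely says which behavior they actually meant.
```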

comment by Bezzi · 2022-12-23T11:38:22.109Z · LW(p) · GW(p)

Strong upvote. I can sort of expect a future where the developer no longer needs to know C or Python or whatever programming language, and can happily develop very human-readable pseudocode at a super high level of abstraction. But a future where the developer does not know any algorithm even in theory and just throws LLMs at everything seems just plain stupid. You don't set up gigantic models if your problem admits a simple linear algorithm.

Replies from: sanxiyn
comment by sanxiyn · 2022-12-23T12:16:15.279Z · LW(p) · GW(p)

That doesn't seem to match history. People gladly do an expensive hash-table name lookup (as in a Python object) even when simple offset addition (as in a C struct) would suffice. Of course people will set up gigantic models even if the problem admits a simple linear algorithm.
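A minimal illustration of that contrast (illustrative only, not a benchmark):

```python
# Attribute access on a plain Python object is a hash-table lookup: the
# string "x" is hashed and probed in the instance __dict__.
class Point:
    pass

p = Point()
p.x, p.y = 3, 4
assert p.__dict__["x"] == p.x == 3  # p.x resolves through the instance dict

# The C equivalent is just offset addition from the struct's base address:
#   struct point { int x; int y; };
#   p.x  /* compiles to a load at (base address of p) + 0, no hashing */
```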

comment by Jon Garcia · 2022-12-22T23:40:24.486Z · LW(p) · GW(p)

Well, I very much doubt that the entire programming world will get access to a four-quintillion-parameter code-generating model within five years. However, I do foresee the descendants of OpenAI Codex getting much more powerful and much more widely used within that timeframe. After all, Transformers came out only five years ago, and they've definitely come a long way since.

Human culture changes more slowly than AI technology, though, so I expect businesses to begin adopting such models only with great trepidation at first. Programmers will almost certainly need to stick around for verification and validation of generated code for quite some time. More output will be expected of programmers, for sure, as the technology is adopted, but that probably won't lead to the elimination of the jobs themselves, just as the cotton gin didn't lead to the end of slavery and the rise of automation didn't lead to the rise of leisure time.

Eventually, though, yes, code generation will be almost universally automated, at least once everyone is comfortable with automated code verification and validation. However, I wouldn't expect that cultural shift to be complete until at least the early 2030s. That's not to say we aren't in fact running out of time, of course.

Replies from: sanxiyn, lahwran
comment by sanxiyn · 2022-12-23T04:07:37.706Z · LW(p) · GW(p)

Code generation will be almost universally automated

I must note that code generation is already almost universally automated: practically nobody writes assembly; it is almost always generated by compilers. But no, compilers didn't end programming.
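This is visible from inside Python itself: the bytecode below is compiler-generated output that practically nobody writes by hand.

```python
import dis

def add(a, b):
    return a + b

# Prints the bytecode the compiler generated for `add`; the exact opcodes
# (e.g. BINARY_ADD vs BINARY_OP) vary across Python versions.
dis.dis(add)
```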

Replies from: Jon Garcia
comment by Jon Garcia · 2022-12-23T05:33:31.401Z · LW(p) · GW(p)

By "code generating being automated," I mean that humans will program using natural human language, without having to think about the particulars of data structures and algorithms (or syntax). A good enough LLM can handle all of that stuff itself, although it might ask the human to verify if the resulting program functions as expected.

Maybe the models will be trained to look for edge cases that technically do what the humans asked for but seem to violate the overall intent of the program. In other words, situations where the program follows the letter of the law (i.e., the program specifications) but not the spirit of the law.

Come to think of it, if you could get an LLM to look for such edge cases robustly, it might be able to help RL systems avoid Goodharting, steering the agent toward the intuitive intent behind a given utility function.
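A sketch of what such an edge-case probe might look like, again with `llm` as a hypothetical stand-in for a model call:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call."""
    raise NotImplementedError

def find_spirit_violations(spec: str, program: str) -> str:
    """Ask the model for inputs where the program meets the letter of the
    spec but defeats its evident intent."""
    return llm(
        f"Specification:\n{spec}\n\nProgram:\n{program}\n\n"
        "List edge cases where this program technically satisfies the "
        "specification but violates its apparent purpose."
    )
```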

comment by the gears to ascension (lahwran) · 2022-12-23T00:23:35.622Z · LW(p) · GW(p)

four-quintillion-parameter code-generating model

yeah, that's probably still another 7 years out by my estimate.

just as the cotton gin didn't lead to the end of slavery and the rise of automation didn't lead to the rise of leisure time

yeah I mean, I don't think anyone would reasonably expect it to with the current ratio of who gets gains from trade

half joking on both counts, though I could probably think through and make a less joking version that has a lot more caveats; obviously neither statement is exactly true as stated

comment by Tomás B. (Bjartur Tómas) · 2022-12-23T03:24:51.317Z · LW(p) · GW(p)

Still at least good recreation.