Terminology: <something>-ware for ML?

post by Oliver Sourbut · 2024-01-03T11:42:37.710Z · LW · GW · 6 comments

This is a question post.


Would it be useful to have a term, analogous to 'hardware', 'software', 'wetware', 'vaporware' etc.[1], which could be used to distinguish learned/discovered components of software, like gradient-trained DNNs, prompt-hacked LLMs, etc?

EDIT 2024-01-04: my current favourites are 'ML-ware' [LW(p) · GW(p)] (HT Shankar), 'fuzzware' (me), and 'hunchware' (Claude), in that order; LW votes concur with 'ML-ware'.

In a lot of conversations with nonexperts, I find that the general notion of AI as being 'programmed' apparently still has a surprisingly strong grip, even after the rise of ML and DL made it even clearer that this is an unhelpful anchor to have. Thane recently expressed similar sentiments, quite strongly [LW · GW].

David Mannheim has a short take, AI is not software [LW · GW], which I think nicely encapsulates some of the important distinctions.

The important thing, for me, is that, in contrast to traditional software, nobody wrote it, the specification is informal at best, and we can't (currently) explain why or how it works. Traditionally, software is 'data you can run', but historically this class of data was exclusively crafted (substantially) by human design.

A valid answer to this question is, 'no, we do not need such a term, just say, "learned components of software" or similar'.

In practice, we probably wouldn't apply this term to, say, a logistic regression, but maybe?

Some ideas, none of which I like enough yet

After a bit of back-and-forth, Claude managed to produce a few which I think are OK, but I'm not very sold on these either.


  1. For some illuminating compendia of -ware terms, see wiktionary, computerhope ware jargon, Everyware from rdrop, or gears' shortlist of suggestions [LW(p) · GW(p)]. Notably, almost all of these are really semantically <thing>-[soft]ware with the 'soft' elided e.g. spyware really means spy-software. ↩︎

Answers

answer by Shankar Sivarajan · 2024-01-03T19:02:10.806Z · LW(p) · GW(p)

Why not just "ML-ware"?

It's not specific to neural networks, corresponds closely to what most people would refer to as "AI" today, and explicitly excludes handcrafted algorithms. The resemblance to "malware" is serendipitous.

comment by Oliver Sourbut · 2024-01-03T21:56:51.493Z · LW(p) · GW(p)

This is simple but surprisingly good, for the reasons you said. It's also easy to say and write. Along with fuzz- and hunch-, this is my favourite candidate so far.

answer by Odd anon · 2024-01-04T06:02:57.901Z · LW(p) · GW(p)

Brainware.

Brains seem like the closest metaphor one could have for these. Lizards, insects, goldfish, and humans all have brains. We don't know how they work. They can be intelligent, but are not necessarily so. They have opaque convoluted processes inside which are not random, but often have unexpected results. They are not built, they are grown.

They're often quite effective at accomplishing something that would be difficult to do any other way. Their structure is based around neurons of some sort. Input, mystery processes, output. They're "mushy" and don't have clear lines, so much of their insides blur together.

AI companies are growing brainware at larger and larger scales, raising more powerful brainware. Want to understand why the chatbot did something? Try some new techniques for probing its brainware.

This term might make the topic feel more mysterious/magical to some than it otherwise would, which is usually something to avoid when developing terminology, but in this case, people have been treating something mysterious as not mysterious.

comment by Oliver Sourbut · 2024-01-05T09:45:24.602Z · LW(p) · GW(p)

I wasn't keen on this, but your justification updated me a bit. I think the most important distinction is indeed the 'grown/evolved/trained/found, not crafted', and 'brainware' didn't immediately evoke that for me. But you're right, brains are inherently grown, they're very diverse, we can probe them but don't always/ever grok them (yet), structure is somewhat visible, somewhat opaque, they fit into a larger computational chassis but adapt to their harness somewhat, properties and abilities can be elicited by unexpected inputs, they exhibit various kinds of learning on various timescales, ...

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2024-01-10T22:43:35.462Z · LW(p) · GW(p)

Incidentally, I noticed Yudkowsky uses 'brainware' in a few places (e.g. in conversation with Paul Christiano) [LW · GW]. But it looks like that's referring to something more analogous to 'architecture and learning algorithms', which I'd put more in the 'software' camp when it comes to the taxonomy I'm pointing at (the 'outer designer' is writing it deliberately).

answer by Shiroe · 2024-01-03T17:37:58.188Z · LW(p) · GW(p)

"tensorware" sprang to mind

comment by johnswentworth · 2024-01-03T17:41:38.849Z · LW(p) · GW(p)

This one independently sprang to mind for me too.

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2024-01-03T21:58:55.415Z · LW(p) · GW(p)

This is nice in its way, and has something going for it, but to me it's far too specific, while also missing the 'how we got this thing' aspect which (I think) is the main reason to emphasise the difference through terminology.

answer by the gears to ascension · 2024-01-05T08:55:56.715Z · LW(p) · GW(p)

because the goal here is to have a word that people skeptical of the "lifeyness" or "brainyness" of ai will accept to understand that it's not normal software, I really like "moldware" and will be using it until something sticks better. it nicely describes the general nature of function approximators without getting into the weeds of why or how, or claiming function approximators have inherent lifeyness. it also feels like the right amount of decrease in "firmness" after software.

more candidates to reject from, a few favorite picks from asking an llm to dump many suggestions: fit-; contour-; match-; mirror-; conform-; mimic-; map-; cast-; imprint-;

comment by Oliver Sourbut · 2024-01-05T09:39:41.569Z · LW(p) · GW(p)

Mold like fungus or mold like sculpt? I like this a bit, and I can imagine it might... grow on me. (yeuch)

Mold-as-in-sculpt has the benefit that it encompasses weirder stuff like prompt-wrangled and scaffolded stuff, and also kinda large-scale GOFAI-like things à la 'MCTS' and whatnot.

answer by Mikhail Samin · 2024-01-04T22:02:48.381Z · LW(p) · GW(p)

Groware/grownware? (Because it’s “grown”, as it’s now popular to describe)

answer by drossbucket · 2024-01-03T16:02:39.353Z · LW(p) · GW(p)

Oozeware?

answer by faul_sname · 2024-01-03T23:14:57.780Z · LW(p) · GW(p)
  • Gradientware? Seems verbose and isn't robust to other ML approaches to fit data.
  • Datagenicware? Captures the core of what makes them like that, but it's a mouthful.
  • Modelware? I don't love it.
  • Puttyware? Aims to capture the "takes the shape of its surroundings" aspect, might be too abstract though. Also implies that it will take the shape of its current surroundings, rather than the ones it was built with
  • Resinware? Maybe more evocative of the "was fit very closely to its particular surroundings" aspect, but still doesn't seem to capture quite what I want.

answer by bvbvbvbvbvbvbvbvbvbvbv · 2024-01-10T07:33:43.130Z · LW(p) · GW(p)

I don't really like any of those ideas. I think it's really interesting that 'aware' is so closely related, though. I think the best bet would be based on software. So something like deepsoftware, nextsoftware, nextgenerationsoftware, enhancedsoftware, etc.

answer by Nathaniel Monson · 2024-01-03T23:12:38.529Z · LW(p) · GW(p)

I like "evolveware" myself.

comment by the gears to ascension (lahwran) · 2024-01-03T23:38:50.100Z · LW(p) · GW(p)

it's distinctly not evolved. gradients vs selection-crossover-mutate are very different algos.

Replies from: nathaniel-monson
comment by Nathaniel Monson (nathaniel-monson) · 2024-01-04T00:01:41.364Z · LW(p) · GW(p)

I agree in the narrow sense of different from bio-evolution, but I think it captures something tonally correct anyway.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-04T00:07:16.714Z · LW(p) · GW(p)

this has been an ongoing point of debate recently, and I think we can do much better than incorrect analogy to evolution.

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2024-01-04T11:32:01.877Z · LW(p) · GW(p)

I hate to wheel this out again [LW · GW] but evolution-broadly-construed is actually a very close fit for gradient methods. Agreed there's a whole lot of specifics in biological natural selection, and a whole lot of specifics in gradient-methods-as-practiced, but they are quite akin really.

Replies from: lahwran, Oliver Sourbut
comment by the gears to ascension (lahwran) · 2024-01-05T05:26:15.690Z · LW(p) · GW(p)

please wheel such things out every time they seem relevant until such time as someone finds a strong argument not to, people underrecommend sturdy work imo. in this case, I think the top comment on that post raises some issues with it that I'd like to see resolved before I'd feel like I could rely on it to be a sturdy generalization. but I appreciate the attempt.

comment by Oliver Sourbut · 2024-01-04T11:34:55.306Z · LW(p) · GW(p)

Separately, I'm not a fan of 'evolveware' or 'evoware' in particular, though I can't put my finger on exactly why. Possibly it's because of a connotation of ongoing evolution, which is sorta true in some cases but could be misleading as a signifier. Though the same criticism could be levelled against 'ML-ware', which I like more.

6 comments


comment by Ann (ann-brown) · 2024-01-03T13:49:58.214Z · LW(p) · GW(p)

Nebulaware ...

Hardware / software is a contrast between 'the physical object computer' and 'not the physical object computer' ... I do think that models are certainly 'not the physical object computer', and what we are actually distinguishing them from are 'programs'.

'Pro-graphein' etymology is 'before-write'. If we look for greek or latin roots that are instead something like 'after-write', in a similar contrast (we wrote the program to do the planned thing, we do the <x> to write the unplanned thing) we get options like 'metagram', 'postgram' ... unfortunately clashing with the instagram wordspace ... or 'postgraph'.

(Existing actual words with similar etymology to what we're looking for with this approach: Epigram, epigraph, metagraph - which arguably is weirdly close in meaning to what we want but would be confusing to override.)

Looking instead to 'code', going back to codex, caudex (tree trunk/stem)... this kind of still works, but let's go for a similar word - folium, folio ...

Alternately 'ramus'/'rami', branch, leading to 'ramification', seems a promising direction in a semantic sense. It has a lot of association with not explicitly planned developments and results. ('Ramagram' is kind of a silly possible word in English, though. Then again, a lot of the AI development space has silly words.)

... More a starting point of ideas here than actually having dug up too many good-sounding words.

Replies from: ann-brown
comment by Ann (ann-brown) · 2024-01-03T14:19:26.471Z · LW(p) · GW(p)

Going a step forward into the etymology of 'program', it comes to mean 'write publicly' or 'written notice', which we could also contrast with roots meaning something else, like 'idi-' from 'idios' for 'private, personal, one's own', or in fact 'privus' itself. (Again, we need to keep clear of actual existing words like 'idiogram'.)

Replies from: Oliver Sourbut
comment by Oliver Sourbut · 2024-01-03T14:26:11.321Z · LW(p) · GW(p)

Nice! 'Idioware'? Risks sounding like 'idiotware'...

Replies from: ann-brown
comment by Ann (ann-brown) · 2024-01-03T14:32:21.735Z · LW(p) · GW(p)

'Idiomware'? Since idioms are expressions with a meaning that can't be deciphered from the individual words used, and AI models are data with a function that can't be easily deciphered from the specific code used?

comment by Oliver Sourbut · 2024-01-04T11:38:21.629Z · LW(p) · GW(p)

@the gears to ascension [LW · GW] , could you elaborate on what the ~25% react on 'hardware' in

Would it be useful to have a term, analogous to 'hardware', ...

means? Is it responding to the whole sentence, 'Would it be useful to have...?' or some other proposition?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-05T08:53:05.074Z · LW(p) · GW(p)

that was due to a bug in how lesswrong figures out what text a recorded react applies to. I'm not sure which react that was supposed to be, but my reacts weren't valuable enough, so I simply removed them.