Can Current AI-Driven Cars Generate True Random Paths? (or, Forever at the Mercy of the Horde)

post by Benjamin Bourlier · 2024-03-30T03:16:33.032Z · LW · GW · 3 comments

There is no evidence requirement for the accusation of irrationality that is a “downvote” on Less Wrong, sadly.

This requirement would be easy to program. The accuser would click the “down” arrow, a reasonably short list of only the most common, and thus most easily avoidable, fallacies would appear, and the accuser would have to select at least one. The selection would be visible to the OP, so that they have at least one explanation as to why they are being accused. Any “wrongness” too nuanced, too subtle to fall into these common fallacy categories would thus not be considered deserving of any accusation whatsoever; it would be “fair” to be so mistaken. Making an easily identifiable “straw man argument”, though, or, say, an “appeal to ignorance” argument, would get you downvoted with those specific accusations attached, necessarily. Readers could then vote on the accusations themselves, registering how many agree that a given argument is an example of, say, “a straw man fallacy”, or whatever it may be.
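
The whole mechanism is a few lines of code. A minimal sketch in Python, where the fallacy list, function name, and data shapes are all my own illustration, not anything LW actually runs:

```python
# Illustrative only: the fallacy list and data shapes are hypothetical,
# not LessWrong's actual data model.
COMMON_FALLACIES = [
    "straw man",
    "appeal to ignorance",
    "ad hominem",
    "false dilemma",
    "begging the question",
]

def downvote(post_id: str, selected_fallacies: list[str]) -> dict:
    """Record a downvote only if the accuser names at least one common fallacy."""
    if not selected_fallacies:
        raise ValueError("a downvote requires at least one fallacy accusation")
    unknown = [f for f in selected_fallacies if f not in COMMON_FALLACIES]
    if unknown:
        raise ValueError(f"too nuanced to accuse: {unknown}")
    # The accusations are visible to the OP, and readers can then vote on each
    # accusation itself (agreement counts start at zero).
    return {"post": post_id, "accusations": {f: 0 for f in selected_fallacies}}
```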

This would, of course, make sense. This is how, say, a jury must deliberate a case. The jury is not allowed to be a mere black box of up/down output, because that would result in… well, that’s just it: what does it result in? What has it resulted in, repeatedly, throughout history?

In any reasonable (rationality-practicing) community, accusations require evidence. Despite the LW community having devoted considerable effort to defining common fallacies and, supposedly, to how we might best take care to avoid them, the idea I just described—which is basically just “law” versus “mob rule”—has either never occurred to the authority figures behind the site (unlikely, though, isn’t that?), or it doesn’t appeal to them, as it would, of course, be a challenge to their authority (much more likely, that, isn’t it?). Anyone I can find on the site who has already suggested this idea has, predictably, been merely downvoted without explanation, and thus effectively, efficiently censored—again, exactly as I would predict.

So, you know, fuck this site, basically. This will be my last post. I’m certainly not going to waste my time crawling out of yet another hole I’ve been undeservedly thrown into by self-congratulatory authority figures for the mere reward of “social capital” (hint, hint, reader—you shouldn’t either, but should do something meaningful with your remaining time on earth, if you so choose). 

This final post will be gallows humor. A bitter joke, which is, yes, mocking what I see as the prevailing norms on this site and in contemporary society in general. To anyone reading who really “gets” the joke—and I expect very few will—I wish there were a community we could enjoy together, to really dig into rationality practice with our remaining time on earth, but as far as I can tell, this is not it, nor is any such thing available elsewhere, not on the internet, nor via in-person encounters. Meditation retreats are salvageable, in some cases, I think.

Gallows humor is the last available form of rational therapy, though, as I see it. Here’s my joke:

I know some present-day cyclists/runners/motorcyclists use APIs playfully to generate random (or pseudo-random) routes, exploring new roads and paths, changing up their routines, all the better to celebrate and enjoy this beautiful world of ours. (“Ours”, as in, humans’—set-up joke, that.) 

But is there an AI-driven car on the market today where you can get in and say, “Anywhere”, and the car will generate a random path (or an acceptably pseudo-random one) and drive until it runs out of fuel/energy, calculating as it generates the path when it will run out, such that it doesn’t, say, come to a stop in the middle of the desert or the wilderness, leaving you stranded? Or maybe you can disable that safety restriction and truly take a gamble, if you’re so recklessly inclined?
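
In toy form, the fuel-budgeting part is just this (purely illustrative Python of my own; a real system would query a routing API for legal roads and real distances, and “don’t strand me” means more than “stay under the estimated range”):

```python
import random

def budgeted_random_route(range_miles: float, reserve: float = 0.1) -> list[tuple[str, float]]:
    """Draw random legs until the car's estimated range (minus a reserve) is spent.

    A "leg" here is just (compass heading, miles); a real planner would work in
    road segments and keep the endpoint somewhere you could actually refuel.
    """
    remaining = range_miles * (1.0 - reserve)  # hold back a safety margin of fuel
    legs = []
    while remaining > 1.0:
        leg = random.uniform(1.0, min(20.0, remaining))
        legs.append((random.choice("NSEW"), round(leg, 1)))
        remaining -= leg
    return legs
```

The `reserve` parameter is the safety restriction in question: set it to zero and you are taking the gamble.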

I’m just curious.

I know the answer in theory should be yes, of course; this is entirely possible to program. I could use existing APIs to generate a random self-driving route right now, obviously, but what I’m picturing is a path that is humanly unpredictable at every step, not an arbitrary path generated to completion, such that once it’s generated, you know where you’re going.

I’m very limited as a programmer, and even I could write some Python code right now, somewhat easily, that would generate a pseudo-random sequence of navigational commands with pseudo-random distances for me to follow in my self-driven car. Each “command” would actually be a set of commands that you follow in order until one of them is possible in the given context: if the output says “Turn left in 5 miles”, but this is impossible in context, you just hit enter or whatever, moving through the command list until a satisfiable command comes up, pulling off to the side of the road if necessary. And if you have to go through some huge number of commands before finding one that’s possible, which may take some ridiculous amount of time, then that’s just what happens; you accept the aimless randomness (pseudo-randomness) of this, and you commit to following the process until you run out of fuel/energy.
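
Something like this minimal sketch, say (every name, action, and parameter here is mine, purely illustrative):

```python
import random

ACTIONS = ["turn left", "turn right", "continue straight", "make a U-turn"]

def command_group(n_fallbacks: int = 10) -> list[tuple[str, float]]:
    """One 'command' is really an ordered fallback list: execute the first entry
    that is actually possible where you are; if none are, pull over and move on
    to the next group."""
    return [(random.choice(ACTIONS), round(random.uniform(0.5, 10.0), 1))
            for _ in range(n_fallbacks)]

def aimless_itinerary(n_groups: int = 100) -> list[list[tuple[str, float]]]:
    """A stream of command groups to follow until the tank runs dry."""
    return [command_group() for _ in range(n_groups)]

if __name__ == "__main__":
    for action, miles in aimless_itinerary()[0]:
        print(f"{action} in {miles} miles")
```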

I’m not going to actually do this, of course. That would be pointlessly annoying (and dangerous) in practice. I’m just curious.

Say you give the “Anywhere” command to the AI-driven car. It might, say, drive as far away as possible in some randomly selected direction. Or it might drive around in circles close to your starting point. Or it might parallel park, pull out, and parallel park again, over and over in the same spot, until it runs out of fuel/energy, while someone waiting behind you for the spot to open up is driven crazy, shouting, “What the hell are you doing?!”, etc. Whatever. Unpredictable output—except that traffic rules are followed.

Sub-question: is repeatedly, pointlessly parallel parking in the same spot like that illegal? To what extent do traffic laws even address “random” or bizarre, aimless driving maneuvers? Obviously, this doesn’t come up often. Suppose you issue the “Anywhere” command and the car drives you through several bank drive-thrus, looping through each some unpredictable number of times. When you’ve appeared in a given drive-thru for, say, the dozenth time in a row, having no actual business at the bank, could the employees justifiably call the police on you for disturbing the normal flow of drive-thru use, and could the police justifiably intervene in this bizarre behavior? Or are you/the AI-driven car free to do these kinds of aimless things?

I’m just curious.

In “On the Road”, Kerouac pursues the idea of driving more or less aimlessly as a Buddhistic/Zen flow-state. I’m wondering if one could take a beatnik AI-trip. Before, you know, the technology kills us all or doesn’t. Say AI is indeed going to kill us all. Maybe we should, you know, take a weird road trip while we still can? Just a thought. Signing out. 

3 comments

comment by duck_master · 2024-03-30T20:28:23.484Z · LW(p) · GW(p)

To be fair, there is no evidence requirement for upvoting, either.

I could see why someone would want this (e.g., Reddit's upvote/downvote system seems terrible), but I think LW is small and homogeneous-ish enough that it works okay here.

comment by Brendan Long (korin43) · 2024-03-30T18:13:15.262Z · LW(p) · GW(p)

I'm not downvoting, since this has already been downvoted far enough, but downvoting doesn't mean you think the post has committed a logical fallacy. It means you want to see less of that kind of thing on LessWrong. In this case, I would downvote because complaining about the voting system isn't interesting or novel.

comment by Jeremy Loizos (jeremy-loizos) · 2024-03-30T11:56:30.948Z · LW(p) · GW(p)

Humanity seems, historically, to have tended toward believing that technology will eventually "save" us from whatever horrors reality may present and whatever awful futures seem inevitable. That has always been the case, but, increasingly, technology is being regarded as a threat to humanity, a crutch, or something altogether evil. If we take the position that technology will eventually kill us, then we might as well use the resources we have left to go on one last gasser, right? And if it will save us, we may as well use our remaining resources on a quest to find that saving tech, the final piece, right? Or maybe there's a way we can live with it, and allow it to live with us: AI as the culmination of technology, where we don't even have to think anymore, just say or gesture what we'd like and the machines will make it so. That, however, sounds quite dull. I think we must put democratically established values in place to guide the development and implementation of technology, not the power- and profit-driven whims of governments and corporations. I suppose, as it stands now, tech won't kill us, because if it did, we couldn't buy new tech or vote.