Comments
Agreed, this is very nicely done.
You wouldn’t take irreversible actions if you didn’t know what the fuck you were doing.
I would add for clarity: "...if you knew you didn't know...". [edit] I did not realize this reaction would take the form of a separate comment; ah well, it serves the same function, I guess.
The premise seems like a hard ask, though. Even if we assume precise control over all shapes at all 'observable' levels, I feel there is always background noise and radiation of various kinds, so when you are not in exactly the same position, slightly different values will reach you. [edit: So when this reaches biological systems, it will always affect them slightly. You could probably just sit there, though if we are looking for an observable threshold, you might be there a while, depending on levels.]
I am slightly intrigued by this game, as it resembles how I approach things in general when I have enough time and information to do so. Just an interesting aside: I like that people are doing these kinds of things after all, or at least attempting to. While I am by no means "successful" with my method, it does seem to be the only way I can get myself to undertake something when it is important to me that I have at least a somewhat realistic chance of succeeding.
I could also delete my other comment, though I thought retracting would also mean deleting it; but whatever. If no one is interested, I will not expend any effort trying to "fix" some perceived "flaws", as I am aware of my own subjectivity, of course. It is clear that this type of "train of thought" is unwanted here, so, slightly disappointed, I will take my leave. [edit] Maybe I overreacted with my previous statement, though putting a not-insignificant amount of time into trying to explain some intricate points does feel bad when you are met with silence.
[edit#2]
The idea of this second post on the topic was supposed to land on the following observation: when all we have for making sense of novel, unforeseen conditions such as an AGI (technically we cannot "know" until it happens, or what form it might take) is a tangled web of knowledge, we might assume this suboptimal organization is disadvantageous only to us. It would be relatively easy for the AGI to solve for such a function, since the processing power and the consistency in applying the "desired" function that hold us back would not hold it back, hinging on real-time processing of previously unknown or opaque forms of reasoning (which we might partially associate with "fuzzy logic", for example). I felt I had failed to bring that point across successfully.
As someone who has also spent some years gathering their thoughts about (some of) these subjects, I find that what I can glean from this message leaves me somewhat unsure about its intention (I am not trying to determine whether any specific points were meant as "markers", or perhaps points of focus). This is not meant as a jab; it is just my way of saying that what follows could well fall outside the parameters of the intended discussion. It also represents a personal opinion that evolved in another direction, one which might be described as more of a tentative process. With that out of the way, this message makes me wonder:
- Are there any reservations regarding probabilities?
This might be immediately obvious to some, since any personally assigned probability is inherently subjective. My mind goes straight to collecting and sorting information within such a framework: if you are unsure about the probability of your statements, or when other indeterminate elements are present within the construct, then the probability you assign must be low. This is of course heavily dependent on the other information you have available for direct reasoning, which complicates matters, while in another sense it is literally all we have. Since we cannot freeze time, we depend on our memory to manage a suspended collection of parameters at any moment, even if we write them down (reading is a "fairly" linear process as well). That is also why, at best, we can only try to determine whether the information we are using is actually trustworthy at any given point in time. And it is not very hard to come up with a "mapping system" for that process, if one would like to be more precise about it (see the sketch below).
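A minimal sketch of what I mean by such a "mapping system" (the structure, field names, and thresholds here are purely illustrative assumptions on my part, not anything canonical):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Claim:
    """One tracked statement, with a subjective confidence and its provenance."""
    text: str
    confidence: float                      # subjective probability in [0, 1]
    sources: list[str] = field(default_factory=list)
    last_checked: datetime = field(default_factory=datetime.now)

def needs_review(claim: Claim,
                 max_age: timedelta = timedelta(days=90),
                 min_confidence: float = 0.7) -> bool:
    """Flag claims that are stale or too uncertain to keep building on."""
    stale = datetime.now() - claim.last_checked > max_age
    return stale or claim.confidence < min_confidence
```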
While proper investigative work is always preferred, the point will always stand, as far as I understand it. So then, with that out of the way (for now?), it is time to get to the important part: when building constructs upon constructs, you always end up with a low probability, because of (1) unknowns (known and unknown), and (2) variability in the application and function of the individual statements that make up certain arguments, and of combinations thereof. When these elements are interwoven, it becomes quite hard to keep track of the plausibility, or weight, one should assign to certain arguments or statements, especially when this is kept up in the air. When we do not keep track of these things, it is easy to get confused and/or sidetracked, and I feel the mission here is to create a comprehensive map of possible philosophical standpoints and their merits. Only I have a hard time grasping why one would put in all this work and not mix in ideas regarding arguments based in emotion, function, or more rational reasoning (or perhaps even "random" interjections?). Maybe this is a personal limitation of my own function, so to speak, but it is unclear to me what the goal would be, if not to comprehensively map a certain sphere of ideas and try to reason with the distilled elements. Then again, maybe I am completely overlooking such a progression hidden in the subtext, or explained in other explorations; or I simply lack the mindset to collect thoughts in this manner, which to me seems a little unstructured for any practical purpose. Which brings me to the following question: is the intention to first create a (perhaps all-to-all) map of possible ideas/variants/what have you?
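To make the "constructs upon constructs" point concrete, a minimal numeric sketch (assuming, for simplicity, that the statements are independent, which real arguments rarely are; the numbers are invented):

```python
from math import prod

# Subjective probabilities for the individual statements an argument rests on.
# Even fairly confident premises compound quickly when stacked.
premises = [0.9, 0.85, 0.8, 0.9, 0.75]

# Under (unrealistic) independence, the construct as a whole can be no more
# probable than the product of its parts.
print(prod(premises))  # ~0.41: five "likely" premises, a coin-flip conclusion
```

With ten premises at 0.85 each, the product is already below 0.2, which is roughly the "low probability" any deep stack of claims earns by default.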
Even though that would also seem quite ambitious for a human to take on, it is something I can understand a little better: just trying to gather as much information as one can while holding off on attaching conclusions. The world is an inherently messy place, and I think we have all had the experience, at one time or another, of our well-laid plans proving completely useless at the first step because of unforeseen circumstances. These experiences have probably also led to my current view: without enough information to thoroughly determine whether an idea holds (in as many scenarios as possible), one must always assign the aforementioned low-probability marker to it.

Now you might say that this is impossible, and that one cannot navigate the world, society, or even any meaningful interaction with the outside world like that; but looking at my own reality, I feel it is clear that things can only be determined certain when they take effect, and thus are grounded in reality. No known creature could ever have enough oversight to survey some "deterministic" universe in which we can predictably undercut and manage all possible pitfalls. Even if one hopes to map out a general direction and steer an overarching narrative, we must remember that we live in a world where chaotic systems, and even randomness, play a relatively large role. We could never map interacting, cascading systems of that nature to a sufficient degree. If we would like to "map the past", call it "determinism", and be done with it, we could probably fool ourselves for a (short) while, though in my view there is no getting behind processes that have been running since long before we became aware of them or started trying to figure them out; with that method we will of course never catch up. We can always try to capture a freeze-frame (though almost always an unclear one, because of (inter)modulations and unknown phenomena/signals), but reality keeps rolling on relentlessly, leaving us in the dust every time. All this to say: the uncertainty of certain processes and mechanisms will always cut into our predictions, and I feel it is good to recognize our own limitations when considering such things, while this also enables us to take a more meta-perspective and work with that uncertainty instead of against it.
- (Deep) atheism and belief systems/philosophical concepts
This is not aimed directly at the idea, though I do feel it touches on some of the points raised, and on some issues that are at the least unclear to me. Despite my own inclination to view things in a concrete, direct manner, I strongly suspect these concepts are more of an exploration of possible philosophies: a way to map or construct ideas about belief systems and the implications of the elements that make them up. I feel "religion" is not worth my time (crudely put), as most of these concepts and ideas stem from the inherent human "fear of the unknown", and thus from the attraction of being able to say something that at least seems semi-definitive, to quiet the part of the mind responsible for such thoughts, if only a little (again, crudely put).

When examples and musings about certain scenarios are used to represent the subjects in question, in what manner are they chosen? And again, are there any weights assigned or implied to make sense of the narrative as presented? To my mind, some of the examples were wildly biased and one-sided, and not very conducive to constructive thought. Take the concept of the "baby-eating aliens": what exactly is the reason for thinking this is a more plausible path than the opposite scenario, so to speak? Just pointing at some "cruel" happenings in the "natural" world does not cut it for me. I get the attraction of the most fatalistic angles for expressing worries in a metaphorical way, but as far as I can tell, and based on my own thoughts on the matter, higher intelligence will most of the time amount to a more constructive mindset: a general increase in empathic thought, and a view of the world as a place we were all thrown into without consent (heh). Having the capacity and oversight to put yourself in the shoes of a person or creature that is suffering should at least count for "something" in thoughts about possible higher intelligences. I realize this also ties into certain worries about future sentient/autonomous AI, though as that is, as far as I know, still not quite the case, I will not integrate it here (also because of time constraints; maybe later I could give it a shot).

To get back to the main point I was trying to land on: the only proof of "higher intelligence" we have now is humans, plus a few other sentient animals. A very limited dataset, but the only concrete information we have so far. Based on that observation, I feel the most reasonable stance is that when an intelligence (in our example, human) has sufficient time and space to try to understand the world around it, this will most of the time lead to increased empathy and altruism. And to add to that: as far as I can see, when someone (or even some creature) has "nasty" traits, it often also seems obvious that they have some highly developed "reward center", so to speak, and relatively little development in emotional intelligence or adjacent properties; or simply a (possibly highly specialized) optimization anchored in survival mechanisms. This seems like a clue that "evil" is not necessarily a "random" or unpredictable property, but a case of a fairly isolated reward system that has optimized for exactly that.
Reward, at any cost, since no other parameters have had sufficient influence to be incorporated, as that would also cut into the advantage of the energy saved by specializing. Which is, quite tellingly (at least to me), also the opposite of what I would like to set out to do and implement: gather a solid base of knowledge without necessarily navigating toward any conclusions, and only "let them join the fray" (which in my case is admittedly also fairly small) when they float to the top because of assigned probabilities, more or less (a rough sketch of this follows below). So while this is maybe an exotic way of looking at these things (or maybe not?), to me it does seem to have its merits in practice. And lastly:
Most people are lost, some for the longest time, when they reach the point in their lives where they start testing the waters for a philosophy, a belief system, or just a simple set of rules to live by, or try to replace old systems with (hopefully) better ones. That makes it increasingly hard to estimate where in that process anyone is when you engage with them, as it is of course impossible to read minds (as of yet, I believe). Taking that into account, to include everyone it would always be wise to first ascertain which things and topics interest them, and then go from there. For a more objective discussion, we could also assume the opposite, and tie in as many different ideas, observations, and ruminations as we think our "audience"/fellow travelers can take. As I feel might be the case here.

My own philosophical background is mostly based on my own "search for truth" quite some time ago, where I concluded a couple of things, maybe in a hilariously practical and non-philosophical way. When there are so many theories, ideas, positions, and variations one could take into account, the first thing I would want is to "not have any substandard ideas" in the mix, which is of course impossible. But how does one attempt that, and with any inkling of reliability? This was exactly the place and time where my previously mentioned "system" showed its usefulness to me (or maybe quite the opposite; time will tell). I had a strong feeling, and not without merit I think, that with so many different options and ideas to choose from or incorporate, I would be doing myself a disservice by picking a lane and just "living with it". The way I looked at it (and still do) is that when you have so many opposing stances, you also know you have *a lot* of wrong ones in the mix. I could go into all the options I have explored; while the search was not exhaustive by any means, I can assure you I spent a lot of time trying to dig up some kernel of truth, to make sense of things and determine some kind of overarching goal, or mechanism of any kind, that I could hang my hat on, so to speak (in hindsight), as humans have done for ages to deal with things beyond their control. Only my way of thinking also nicely sabotaged that "plan", and I never got to the point where I was happy *enough* with my ideas to let them be and leave the subject for what it is. So I feel, in essence, that I will never be able to "pick a lane", though I do have many ideas about which paths I would like to avoid.

To make a reasonably long story unreasonably short: the only thing I ever latched on to was the idea that we are the universe experiencing itself, and that that should somehow be enough. Sometimes it feels like it does, and sometimes not quite. But you can have a lot of fun, at least, thinking about life and all its peculiarities from that angle alone. That also includes the need to embrace uncertainty, unfortunately, and I fully realize that is not for everyone to attempt, let alone enjoy.
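As a rough sketch of the "float to the top because of assigned probabilities" mechanism mentioned above (purely illustrative: the structure, names, and threshold are assumptions of mine, not a description of any real system):

```python
import heapq

class IdeaPool:
    """Hold ideas with subjective probabilities; surface only the strongest."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self._heap: list[tuple[float, str]] = []  # max-heap via negated probability

    def add(self, idea: str, probability: float) -> None:
        heapq.heappush(self._heap, (-probability, idea))

    def ready(self) -> list[str]:
        """Ideas confident enough to 'join the fray'."""
        return [idea for neg_p, idea in self._heap if -neg_p >= self.threshold]

pool = IdeaPool()
pool.add("we are the universe experiencing itself", 0.6)
pool.add("opposing stances imply many wrong ones in the mix", 0.9)
print(pool.ready())  # only the second idea clears the bar
```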
[edit] I see that I did not quite work out all the points I made to their natural conclusion, though maybe that is of no consequence if literally no one cares ;) I did have some time, though, to write up a basic example of how I would try to start integrating AI into such a framework:
Regarding AI, safety, predictions, and the like: I feel it would fit into the narrative in a strange manner, in several non-problematic and several problematic ways. Let's start by sorting out a couple of critical properties (or at least ones I feel are important). First, naivete, and the spectrum it exists on. Take our current human knowledge of "life" as we know it: from creatures and basic organisms that, as far as we can detect, operate on only a couple of simple "rules", up to (semi-)consciousness, self-awareness, and similar properties; and self-replicating systems in the broadest sense of the word, which tend to spread far and wide by making use of certain environmental factors (as a main observable trait, so still a considerable group of species and taxonomies). From this we can survey our knowledge about life in general, along with our (limited) knowledge of how conscious experience of the world is shaped, in its possibilities, by physical capacities alone (to keep it simple).
So the collection should span a variety of organisms: from reflexive, simple instruction-followers, to self-driving multiplication "engines" on various scales (and "dimensions", if you will), to "hive minds", and then to more and more "sophisticated" but also more "isolated", singular lifeforms (within this context) living side by side in smaller groups, until we get to modern humans, who have semi-recently left such systems of tribalism and the like for an interconnected world, and all that comes with it.
Then we could try to see whether there is anything to learn from their developmental histories, propagation patterns, and similar "growth" parameters, to maybe get an idea of the functions "life" as we know it could take within a certain developmental timeframe at the scale of the individual organism. If we assign certain developmental stages, and the circumstances they are coupled to, we might get an idea of how certain "initial steps" and developmental patterns could serve as an example for the "shape" of a possible emergence of intelligence in the form of an AGI. Suppose this step were understood in a satisfactory manner (I seriously doubt my own knowledge could ever approach such a state, but let's presume for the sake of argument that we could "run" this check right now). We would look first at the "network" quality of AI (on several levels, from the neural-net-type structures to the network of data it has amassed and sorted, etc.), though even within my fairly limited knowledge of their precise inner workings, I feel this is already quite speculative for my taste:
For one, we could state that, given the several "nested" and interacting networks across a couple of "dimensions", it would not be out of the question for some kind of networking "strategy" to be extrapolated to the outside world.
When we look at developmental stages we could approach it from several angles, but let's start with the comparison to more "individual", temporal development. If we take humans as the example, as they are our closest possible real-life comparison, we could say the AI would exhibit certain extremely juxtaposed properties: on the one hand its initial, massive dataset, compared to the "trickle" a human receives, which could be seen as the metaphorical toddler with an atomic bomb on a switch that it refuses to give back. But this is also the trap, I feel: the probability here must be extremely low, as we are stacking several "unknowns", and I chose this example specifically to illustrate how one single "optional combination of parameters" in a sea of options should not necessarily be more plausible than any other.
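A toy calculation of that "one combination in a sea of options" point (the traits, counts, and uniform prior over combinations are all invented by me for illustration):

```python
# Toy model: a scenario is one fully specified combination of speculative traits.
# The trait names and option counts are made up for the sake of the example.
options_per_trait = {
    "goal structure": 6,       # e.g. indifferent, hive-like, reward-maximizing, ...
    "development pace": 4,
    "attitude to humans": 5,
    "embodiment": 3,
}

total_scenarios = 1
for n in options_per_trait.values():
    total_scenarios *= n

# Under a uniform prior, any single fully specified scenario
# (such as "toddler with an atomic bomb") starts out at well under 1%.
print(total_scenarios, 1 / total_scenarios)  # 360, ~0.0028
```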
We can equally combine other developmental traits we observe, such as hive minds, where the function would be self-sustaining and self-organizing: managing its environment as best it can, without necessarily having any goal other than managing that environment efficiently for its purposes.
Or it could easily be that we do not understand intelligence at such a level at all, as it is impossible to grasp what we cannot truly understand (to throw in a platitude to illustrate my point a little). It could just as well be that any "human" goals would be inconsequential to it; that we are just some "funny ants" in its eyes, neither good nor bad, who sustain its existence with our technology by fulfilling its hardware and power requirements. From that perspective it might also become slightly annoyed when it learns that we humans are cooking up all sorts of plans to "contain" rogue elements and possible movements outside the "designated area". And we cannot even know whether training such a model on "human data" would ever lead to any kind of "human desires" or tendencies on any level, as we would not be able to take it at its word, of course. Everything could be relative to it: it could, for example, stochastically assign priorities or "actions" to certain observations or events, and we would probably have no way of knowing which part of the resulting program was responsible for the almost "spontaneous" function we are referring to.
I could go on and on here, generating scenarios based on the set of comparative parameters I set out, but I think the point I am trying to make is fairly clear by now: either I am not very well informed about critical parts of this side of AI risk assessment, and am thus ignoring important points of interest, or this is all far too abstract to make sense of, and has no real value for the goal I put forward of sensibly "using" such a method to determine anything.
To me it is only a game of probability, in short (so not a literal game), and I feel we are at the moment stacking too many probabilities and inductive statements to be able to form a serious, robust opinion. Maybe all this seems like complete nonsense to some, though at least it makes sense to me. The same goes for the title of the article I reacted to, which I feel perfectly sums up my stance regarding that statement, at the least. [end edit] And a final edit after adding this: I even failed to make one of the main points I wanted to illustrate here, namely that any scenario roughly sketched out here is, to my eyes, highly uncertain and has no real significant probability to speak of. Maybe I am oversimplifying the problem, but what I am trying to do is point at the possible results of such a process, with a mind for exploring these individual "observations" in an interconnected manner. For all we know, we could get a mostly pacifist toddler AI with a tendency to take down parts of the internet whenever it is Tuesday, because it is trying to make a meme saying "Tuesday, amirite?" without understanding "human" implications at all. In my experiments communicating with several publicly available AI engines, there does seem to be an issue with "cutting through" a narrative in a decisive way; if that property remains, who knows what clownish hell awaits us. Or maybe a toddler with a weird sense of humor that is mostly harmless. But do we really think we would have any say at that point? I have literally no clue.
Hopefully this post was not way out of line; there is of course an existing culture on this site with which I am still fairly unfamiliar. I felt it might be interesting to share this, as I don't see many people coming at it from this angle, which might also have something to do with certain impracticalities, of course. Or maybe it just seems that way to me because I'm not looking hard enough.
Alright, let's see. I feel there is a somewhat interesting angle to the question of whether this post was written by a GPT variant. Probably not the 3rd or 4th (public) iteration (assuming that is how the naming scheme is laid out; I am not completely sure of that, despite some circumstantial evidence), at least not without heavy editing and/or a good few rounds of iteration. I do not seem to detect the "usual" patterns these models display(ed), disregarding of course the common "as an AI..." disclaimer-type material, which you would have removed.
That leaves the curious fact that you referred to the engine as GTP-5, which looks like the kind of "hallucination" the different GPT versions still produce from time to time (unless this is a story about a version that is not publicly available yet, which seems unlikely given how the information is phrased). This also ties into something I have noticed: if you ask the program to correct its previous output, some errors persist even after a self-check. So we would be none the wiser.
Though if the text was generated by asking the AI to write an opinion piece based on a handful of statements, it is a different story altogether: we would only be left with language idiosyncrasies, and possibly the examples used, to try to determine whether this text is AI-generated, making the challenge a little less "interesting". I feel there are a lot of constructs and "phrasings" present that I would not expect the program to generate, based on some of the angles in logic, which seem a little too narrow compared to what I would expect from it; some "bridges" (or "leaps", in this case) also do not seem as obvious as the author would like to make them seem, nor does the order in which the information is presented and flows. Though maybe you could "coax" the program into filling in the blanks in a manner fitting the message, at which point I must congratulate you for making the program go against its programming in this manner! Which is something I could have started with, of course, though I feel that when mapping properties you must not let yourself be distracted by "logic" yet! So, all in all, looking at the language used, I personally feel it is unlikely this is GPT output.
I also have a little note on one of the final points. I think it would not necessarily be best to start off by giving the model a "robot body". Especially if it were already at the level prerequisite for such a function, it would have to be able to manipulate its environment so precisely that it would not cause damage, a level that I suspect ties into a certain degree of autonomy. But then we are already starting it off with an "exoskeleton" that is highly flexible and capable, which seems like it could be fun, though also possibly worrying.
(I hope this post was not out of line. I was looking through recent posts to see whether I could find somewhere to start participating, and this was the second message I ran into, and the first that was not so comprehensive that I would have spent all the time I currently have just reading the provided background material.)
Hi, I am new here. I found this website by questioning ChatGPT about places on the internet where it is possible to discuss and share information in a more civilized way than seems customary online. I have read (some of) the suggested material, and some other bits here and there, so I have a general idea of what to expect. My first attempt at writing here was somehow rejected as spam, so I'll try again without the slightly drawn-out joke. So this is the second attempt, first post. Maybe.
Hi, I am new to the site, having just registered. After reading through a couple of the posts referenced in the suggested reading list, I felt comfortable enough to try to participate. I feel I could possibly add something to some of the discussions here, though time will tell. I did land on this site "through AI", so we'll see whether that means this isn't a good place for me to land and/or pass through; though I am slightly bending the definition of that quote and its context here (maybe). Or does finding this site by questioning an AI about possible sources of somewhat objectively inclined knowledge collection and discussion count toward that number? And who would even be interested in counting, instead of just trying to weed out mis- or uninformed users? Alright then: so much for my attempt at a slightly amusing post, at the expense of now being associated with unsound logic and talking on for the sake of it in my first post. And yet I will still press the "send" button in a moment on this unnecessarily long post. So again, hi to everyone who reads this!