Comments
I'd add that correctness often is security: a job poorly done is an opportunity for a hacker to subvert your system and turn your poor job into a great job for himself.
Have you played something like Slay the Spire? Or Mechabellum, which is popular right now? Deck builders don't require coordination at all but demand an understanding of tradeoffs and risk management. If anything, those skills are neglected parts of intelligence. And how high is the barrier to entry for something like Super Auto Pets?
I've heard that a box fan works best if placed some distance from the window, not in the window.
I remember reading about a zoologist couple that tried to raise their child together with a baby gorilla. The gorilla's development stopped at a certain age, and that stalled the human child's development, so they had to be separated.
Example 1: Timmie gets cookies, and I am glad he did, even if I am dead and can't see him.
Example 2: Timmie gets sucked into a black hole, but I don't care, as I can't see him.
It is raining today, and asking "why?" is a mistake because I am already experiencing rain right now, and counterfactual me isn't me?
It seems to me the explanation is confused in a way that obscures the decision-making process about which questions are useful to consider.
>I would go so far as to say that the vast majority of potential production, and potential value, gets sacrificed on this alter, once one includes opportunities missed.
Altar? Same here:
>A ton of its potential usefulness was sacrificed on the alter of its short-term outer alignment.
If some process in my brain is conscious despite not being part of my consciousness, it matters too! While I don't expect that to be the case, I think there is a bias against even considering such a possibility.
Not long ago I often heard that AI is "beyond the horizon". I agreed with that, while recognising how close the horizon had become. The technological horizon isn't a fixed time into the future; it is not just a property of (~unchanging) people but also of the available tech.
Now I hear "it takes a lot of time and effort", but again, that does not have to mean "a lot of time and effort from my subjective view". It can take a lot of time and effort and at the same time be done in the blink of an eye - my eye, but not the eye of whoever is doing it. A lot of time and effort doesn't have to subjectively feel like "a lot of time and effort" to me.
Got Mega Shark Versus Giant Octopus vibes from this, but your prompts were much more sane, which made things feel inconsistent. Still, I somewhat enjoyed it.
Hyperloop? I am not sold on his talent being "finding good things to do" as opposed to "successfully doing things". And the second has a lot to do with energy/drive, not only intellect. Hence I expect his intelligence to be overestimated. But I agree with your estimate, which is not what I expected.
It is important to remember that our ultimate goal is survival. If someone builds a system that may not meet the strict definition of AGI but still poses a significant threat to us, then the terminology itself becomes less relevant. In such cases, employing a 'taboo-your-words' approach can be beneficial.
Now let's think of intelligence as "pattern recognition". It is not all that intelligence is, but it is a big chunk of it, and it is a concrete thing we can point to and reason about, while many other bits are not even known.[1]
In that case, GI is general/meta/deep pattern recognition: patterns about patterns, and patterns that apply to many practical cases, something like that.
An obvious thing to note here: the ability to solve problems can be based on a large number of shallow patterns or a small number of deep patterns. We are pretty sure that a significant part of LLM capability is the shallow-pattern case, but there are hints of at least some deep patterns appearing.
And I think that points to some answers: LLMs appear intelligent through a sheer number of shallow patterns. But for a system to be dangerous, the number of required shallow patterns is so large that it is essentially impossible to achieve. So we can meaningfully say it is not dangerous, it is not AGI... Except, as mentioned earlier, there seem to be some deep patterns emerging. And we don't know how many. As for the pre-home-computer-era researchers, I bet they could not imagine the number of shallow patterns that can be put into a system.
I hope this provided at least some idea of how to approach some of your questions, but of course, in reality it is much more complicated: there is no sharp distinction between shallow and deep patterns, and there are other aspects of intelligence. For me, at least, it is surprising that it is possible to get GPT-3.5 from seemingly relatively shallow patterns, so I myself "could not imagine the number of shallow patterns that can be put into a system".
[1] I tried ChatGPT on this paragraph; I like the result but felt it was too long:
Intelligence indeed encompasses more than just pattern recognition, but it is a significant component that we can readily identify and discuss. Pattern recognition involves the ability to identify and understand recurring structures, relationships, and regularities within data or information. By recognizing patterns, we can make predictions, draw conclusions, and solve problems.
While there are other aspects of intelligence beyond pattern recognition, such as creativity, critical thinking, and adaptability, they might be more challenging to define precisely. Pattern recognition provides a tangible starting point for reasoning about intelligence.
If we consider problem-solving as a defining characteristic of intelligence, it aligns well with pattern recognition. Problem-solving often requires identifying patterns within the problem space, recognizing similarities to previously encountered problems, and applying appropriate strategies and solutions.
However, it's important to acknowledge that intelligence is a complex and multifaceted concept, and there are still many unknowns about its nature and mechanisms. Exploring various dimensions of intelligence beyond pattern recognition can contribute to a more comprehensive understanding.
>So I'm not super confident in this, but: I don't have a model which suggests descaling a kettle (similar to mine) will have particularly noticeable effects. If it gets sufficiently scaled it'll self-destruct, but I expect this to be very rare. (I don't have a model for how much scaling it takes to reach that point, it's just that I don't remember ever hearing a friend, family member or reddit commenter say they destroyed their kettle by not descaling.)
Scale falls off by itself, so it is not really possible to self-destruct a kettle under normal circumstances. For my 30 l water heater, it took more than 5 l of scale to self-destruct; there is just no way not to notice spalled scale on that scale.
I read a bit about the interaction between gas, ethanol, and water - fascinating!
Bullshit. I don't believe it. Gas does not turn into water. I am sorry, but somebody "borrowing" your gas and returning water is the more likely explanation. (No special knowledge here; tell me I am wrong!)
Here he touched on this (the "Large language models" timestamp in the video description), and maybe somewhere else in the video - I can't seem to find it. It is much better to get it directly from him, but it is 4 hours long, so...
My attempt at a summary, with a bit of inference, so take it with a dose of salt:
There is some "core" of intelligence which he expected to be relatively hard to find by experimentation (but more components than he expected have already been found by experimentation/gradient descent, so this is partially wrong, and he is afraid it may be completely wrong).
He was thinking that without the full "core", intelligence is non-functional - GPT-4 falsified this. It is more functional than he expected, enough to produce a mess that can be perceived as human-level, but not really. Probably our thinking of GPT-4 as being at human level is a bias? So GPT-4 has impressive pieces, but they don't work in unison with each other?
This is what my (mis)interpretation of his words looks like; I am least certain about the last parts. (I wonder, can it be that GPT-4 already has all the "core" components but is just stupid - barely intelligent, and looking impressive only because of training?)
That is a novel (and, in my opinion, potentially important/scary) capability of GPT-4. You can look at A_Posthuman's comment below for details. I do expect it to work on chess; I'd be interested if proven wrong. You mentioned ChatGPT, but it can't do reflection at a usable level. To be fair, I don't know whether GPT-4's capabilities are at a useful level (or only a tweak away) right now, or how far they can be pushed if they are (as in, whether it can self-improve to ASI), but for solving the "curse" problem, even weak reflection capabilities should suffice.
>then have humans write the corrections to the continuations
It doesn't have to be humans any more; GPT-4 can do this to itself.
I am not an AI researcher. (I have a feeling you may have mistaken me for one?)
I don't expect much gain after the first iteration of reflection (I don't know if it was attempted). When calling it recursive, I was referring to the AlphaGo Zero style of distillation and amplification: we have a model producing Q->A, reflect on A to get A', and update the model in the direction Q->A'. We have reached the state where A' is better than A; before, if this had been tried, the result would probably have been distilled stupidity instead of distilled intelligence.
Such a process, in my opinion, is a significant step in the direction of "capable of building new, better AI", worth explicitly noticing before we take such a capability for granted.
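For concreteness, here is a minimal, hypothetical sketch of that reflect-then-distill loop. None of these functions are real APIs; `answer`, `reflect`, and `finetune` are toy stand-ins meant only to show the structure (Q -> A, A -> A', then pushing Q -> A' back into the model).

```python
# Toy sketch of AlphaGo-Zero-style "reflect, then distill" (hypothetical, not a real API).
def answer(model, question):
    # Stand-in for the base model producing a draft answer A for question Q.
    return model.get(question, "draft answer to " + question)

def reflect(model, question, draft):
    # Stand-in for the model critiquing and repairing its own draft, producing A'.
    return draft + " (revised after self-critique)"

def finetune(model, pairs):
    # Stand-in for a gradient update that pushes Q -> A' into the weights.
    updated = dict(model)
    updated.update(pairs)
    return updated

def improve(model, questions, rounds=3):
    for _ in range(rounds):
        pairs = {}
        for q in questions:
            a = answer(model, q)             # amplification: draft answer A
            pairs[q] = reflect(model, q, a)  # reflection: A -> A'
        model = finetune(model, pairs)       # distillation: update toward Q -> A'
    return model

print(improve({}, ["Q1", "Q2"], rounds=1))
```

The loop only distills intelligence rather than stupidity if the reflection step reliably makes A' better than A, which is exactly the condition discussed above.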
>Will you keep getting better results in the future as a result of having this done in the past?
Only if you apply the "distillation and amplification" part, and I hope that if you go too hard in the absence of some kind of reality anchoring, it may go off the rails and result in distilled weirdness. And hopefully you need a bigger model anyway.
>For example, if I want to find the maxima of a function, it doesn't matter if I use conjugate gradient descent or Newton's method or interpolation methods or whatever, they will tend to find the same maxima assuming they are looking at the same function.
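(As a quick side illustration of the quoted claim, not part of either comment: assuming numpy and scipy are installed, three different optimizers started from the same point report the same maximum of a simple one-dimensional function.)

```python
# Maximize f(x) = -(x - 3)^2 - 1 by minimizing -f with three different methods.
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    return (x[0] - 3.0) ** 2 + 1.0

def neg_f_grad(x):
    return np.array([2.0 * (x[0] - 3.0)])

x0 = np.array([-10.0])
for method, kwargs in [("Nelder-Mead", {}),
                       ("CG", {"jac": neg_f_grad}),
                       ("Newton-CG", {"jac": neg_f_grad})]:
    result = minimize(neg_f, x0, method=method, **kwargs)
    print(method, result.x)  # each reports x close to 3.0
```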
Trying to channel my internal Eliezer:
It is painfully obvious that we are not the pinnacle of efficient intelligence. If evolution were to run more optimisation on us, we would become more efficient... and lose the important parts that matter to us and are of no consequence to evolution. So yes, we would end up being the same alien thing as AI.
The thing that makes us us is a bug. So you have to hope gradient descent makes exactly the same mistake evolution did, but there are a lot of possible mistakes.
One of the arguments/intuition pumps in the Chinese room experiment is to have the human inside the room memorize the room's contents instead.
An excellent example that makes me notice my bias: I feel that only one person can occupy my head. The same bias makes people believe augmentation is a safe solution.
As I understood it, ultrasonic disruption was a feature, as it allows targeting a specific brain area.
Putting aside the obvious trust issues, how close is current technology to correcting things like undesired emotional states?
I remember reading about a technology where a retrovirus is injected into the bloodstream that is capable of modifying the receptiveness of brain cells to a certain artificial chemical. The virus can't pass the blood-brain barrier unless ultrasonic vibrations are applied to a selected brain area, making it temporarily passable there. This allows turning a normally ~inert chemical into a targeted drug. Probably nowhere close to being applied to humans - the approval process will be a nightmare.
It seems to me you are confused by the overlap in meanings of the word "state".
In this context, it is the "state of the target of inquiry" - water either changes its solid form by melting or it does not. So "state" refers to the difference between "yes, water changes its solid form by melting" and "no, water does not change its solid form by melting". Those are your two possible states, and the fact that water itself has an unrelated set of states (solid/liquid/gas) to be in is just a coincidence.
Instead of making up bullshit jobs or UBI, people should be paid for receiving an education. You can argue it is a specific kind of bullshit job, but I think a lot of people here have the stereotype of education being something you pay for.
I was thinking people should be paid for receiving an education (maybe a sport kind of education/training) instead of UBI.
Q: 3,2,1 A: 4,2,1 in the second set - can you retry while fixing this? Or was it intentional?
There is an obvious class of predictions, like killing your own family or yourself, and such predictions are a good example of what the absence of free will feels like from the inside. There are things I care about, and I am not free to throw them away.
But the genome has proxies for most things it wants to control, so maybe it is the other way around? Instead of extracting information about the concept, the genome provides a crystallization centre (using proxies) around which the concept forms?
My friend commented that there is a surge of cars with Russian plate numbers, and apparently some Russians from Osh (our second-largest city) have come back as well. I was surprised to see such vivid confirmation.
I hope you and your family have gotten your COVID shots; people don't wear masks or keep social distancing here.
>Нет войны. Over
At the beginning of the last paragraph, it should be "войне" ("no to war") rather than "войны" ("there is no war").
Have you heard about the Communist Party?
I can't see how this is an argument in good faith. Your choice to use "Russia" instead of "USSR" feels intentionally misleading.
>The American Government says lots of hypocritical things about regime change and interfering with elections and so on; I think this is bad and wish they wouldn't do it.
Imagine if the reaction to this war were like the reaction to Iraq. That is scary even for me - as an ethnic Russian, I feel a measure of personal responsibility for this. I did not want Russia to become a second US. But at the same time, the difference is striking.
UPD: I am talking about the reaction in the West; here in Kyrgyzstan reactions are about the same - yeah, "wish they wouldn't do it" (careful, low confidence; it is hard to judge general public opinion from a few data points).
I don't expect war to be quick and easy a priori. Taking Kiev? The thought alone is nightmare fuel.
I was talking about the timeline up to the war, not the war itself. Ah, I see - "happening" instead of "happened", fixed.
It is an issue that I noted myself, and it came up repeatedly in other descriptions: recognition of the D/LNR, the military moving there, war - it all happened too fast. Like a script on fast forward. Why?
So it seems China knew something and is effectively siding with Russia. But one thing got my attention: they seemed to be surprised by the speed of it. From multiple sources I see this surprise again and again. What happened? Is it just favorable weather conditions?
I don't see an avenue for negotiations. Any attempts before were answered with "How dare you?!" and "This cannot stand!". How do you think it was going to go now, after the invasion? One reason I was thinking an invasion was not possible was exactly that: a complete breakup of any cooperative relationship. But maybe Russia looks at it a bit differently: it is already pretty much completely broken, so what's the point? (No excuse for a shooting war though - I have no idea why that needed to happen.)
"looking at" as in "anticipate"
Opinions of a few people from Kyrgyzstan, middle-aged and older:
Nobody understands why this invasion started (this seems to be true for Russians too); they did not want Russia to invade and are scared and disheartened by the war. Many have relatives in Ukraine. But! They suspect some unknown reasons for this to happen, probably intel on a NATO deployment. (The invasion seems rushed, but I'd consider that weak evidence - there are a number of alternative explanations.)
Also, a bit of clarification on the 2014 popularity: it was an invasion semantically, but casualties were extremely low, while right now we are looking at rivers of blood. I wonder how the Russian population is reacting.
Oh wow, we have a scary development. I did not believe this could happen until the very end. Yugoslavia all over again. This has to mean a complete abandonment of the cooperative relationship with the West. That is a big decision, with huge economic consequences; I did not expect Putin to be capable of it.
The other big reason why this was unexpected to me is China. China had to know about it and still agreed to back up Russia. They are not coming out of it completely unscathed, are they? That causes me to update on a Taiwan invasion significantly (still low, but not negligible anymore). I am interested to hear the official Chinese reaction.
As in, it provides an (extremely weak) pretext for invasion? It is the same as Armenia, which is also in a defensive treaty with Russia, and the governments of those regions are dependent on Russia, so the likelihood of them getting adventurous by themselves is low. It is hard to imagine Ukrainian forces leaving Donbas without a fight, so we are speaking about an actual invasion here, with shooting and a lot of dead bodies. If that happens, it will happen because someone wanted it to happen, not because of a technical pretext.
Oh wow, we have an interesting development. I did not expect any, so my neck has been chopped somewhat, and I still don't understand why. Apparently there is some urgency at play here.
On the other hand, the situation is consistent with my understanding of Russian strategy: put actual servicemen there, so any attack will be an attack on the actual Russian army, with dire consequences, international law or no.
So there will technically be an invasion, but of the Russian kind, without dead bodies. I was wondering how the situation would resolve, and this resolution seems both obvious in retrospect and far more benign than what I imagined. (Of course, it is far from actually resolved; this is more of a prediction/hope. In particular, I can't answer "why now?")
On the meta level, making a prediction and being proven wrong is more fun than I expected; I should write my thoughts down more often.
>While NATO is a defence organization, Ukraine hopes that the membership in NATO will help Ukraine to attack Donbas and Crimea, which are its territories according to international laws – without risks of Russian retaliation.
Russian retaliation is a matter of Russian, not international, law. Ascribing such naivety to Ukrainians is a bit too much for my world model. You seem to operate under the assumption that Putin wants to attack and struggle to come up with a rationale for his motive, while failing to address the much simpler possibility that the assumption is wrong to begin with. He was supposed to attack yesterday - did he? And a week before, and a week before that? Maybe it is time to update on the evidence?
"My preferences" seem to be confused statement. There is how "me" looks from outside and how I look at myself from inside, but those are two different thing unless my conscious self is all that is to "me". Any algorithm trying to find my true preferences have to analyze my brain, find conscious part(s?!) and somehow understand it. Anything else runs the risk of imprisoning me in my own body, mute and helpless.
How about historical precedent? Gods did arrive on Spanish ships in South America.
>So children are good, and just because not everyone agrees on that doesn't somehow make children not good.
I can't make sense of this statement. If my terminal value is that children are bad, then that's it. How can you even start to argue with that?
They are slaves. But it does not really matter. I may be wrong. You may be wrong. We may have different terminal values. The point is, not all people think that children are good even right now, much less far in the future. The idea that a grabby civilization will consist of many individuals is not a given. Who knows, maybe they will even become a single individual. It sounds like a bad idea to me now, but maybe it will become the obvious choice for future me?
I find the idea of having children to be morally abhorrent. Granted, right now the alternatives are even worse, so it is the lesser evil - but evil nonetheless. Presumably a grabby civilization is able to come up with better alternatives. Maybe you will become an individual of a grabby civilization, an individual with a great personality and a rich inner world spanning multiple solar systems?
In this scenario we are giving the doctor an awful lot of power. We need to have unreasonably high trust in him to make such a choice practical. Handling other people's lives often has funny effects on one's morality.
There is another problem: in reality, shit happens. But our intuition says something like, "Shit happens to other people. I have control over my life; as long as I do everything right, no shit will ever happen to me." See? In my mind there is no way I will ever need an organ transplant, so such an arrangement is strictly detrimental, while in reality chances are I will be saved by it. Again, there are good reasons for such an intuition, as it promotes responsibility for one's own actions. If we are to make health "common property", who will be responsible for it?
I suspect it is easier to solve artificial organs than human nature.