Comments

Comment by Anders Lindström (anders-lindstroem) on Anxiety vs. Depression · 2024-03-23T15:42:56.667Z · LW · GW

Glad to hear you are doing better!

Ok, that is an interesting route to go. Let "us" know how it goes if you feel like sharing your journey.

Comment by Anders Lindström (anders-lindstroem) on Anxiety vs. Depression · 2024-03-17T13:18:13.790Z · LW · GW

Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep 'em coming!

Comment by Anders Lindström (anders-lindstroem) on Highlights from Lex Fridman’s interview of Yann LeCun · 2024-03-14T14:48:30.631Z · LW · GW

Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an "AI-washing" attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback whether he thinks football should be banned because of the risk of serious brain injuries; of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is "aligned" to their vision, and they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:

"AI will not make humans redundant."

"AI is not an existential risk."

...

Comment by Anders Lindström (anders-lindstroem) on China-AI forecasts · 2024-02-26T12:56:56.880Z · LW · GW

I am not so sure that Xi would like to get to AGI any time soon. At least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the labor market in China could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think the export ban on advanced GPUs is a boon to him for the time being.

Comment by Anders Lindström (anders-lindstroem) on Why you, personally, should want a larger human population · 2024-02-24T12:44:13.488Z · LW · GW

The Paper Clip

Scene: The earth

Characters: A, an anti-humanist

B, a pro-humanist

A: "We need to reduce the population by 90-95% to not deplete all resources and destroy the ecosystem"

B: "We need a larger population so we get more smart people, more geniuses, more productive people"

(Enter ASI)

ASI: "Solved. What else can I help you with today?"

Comment by Anders Lindström (anders-lindstroem) on The One and a Half Gemini · 2024-02-22T14:31:39.395Z · LW · GW

Imagine having a context window that fits something like PubMed or even The Pile (but that's a bit into the future...). What would you be able to find in there that no one could see using traditional literature review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that could handle a 100-1,000 million token context window if they wanted, or perhaps they already have one for internal research? It's "just" 10x+ of what they said they have experimented with, with no mention of any special-purpose hardware.

Comment by Anders Lindström (anders-lindstroem) on When Should Copyright Get Shorter? · 2024-02-21T12:17:35.209Z · LW · GW

Dagon, thank you for following up on my comment.

Yes, they are in some ways apples and oranges, but both put a limit on your ability to create things. One can argue that intellectual property rights have been beneficial for humanity as a whole, but they also criminalize one of our most natural instincts, which is to mimic and copy what other humans do to increase our chances of survival. Which leads to the next question: would people stop innovating and creating if they could not protect their work?

Comment by Anders Lindström (anders-lindstroem) on When Should Copyright Get Shorter? · 2024-02-20T10:34:27.000Z · LW · GW

Dagon, yes, that seems like a reasonable setup. It's pretty amazing that world- and life-altering inventions get protection for a maximum of 20 years from the filing date, whereas someone who doodles something on a piece of paper gets protection that lasts the life of the author plus 70 years. But... maybe the culture war is more important to win than the technology war?

Anyways, with the content explosion on the internet, I would assume that pretty much every permutation of everything you can think of is now effectively copyrighted well into the foreseeable future. Will that minefield prove to be the reason to reform copyright law so that it fits a digital mass-creation age?

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-14T21:25:13.341Z · LW · GW

Thank you Gerald Monroe for explaining your thoughts further.

And this is what bothers me: the willingness of apparently intelligent people to risk everything. I am fine with people risking their own life and health for whatever reason they see fit, but to relentlessly pursue AGI without anyone really knowing how to control it is NOT ok. People can't dabble with anthrax or Ebola at home, for the obvious reason that they can't control them! But with AI anything goes, and it is, if anything, encouraged by governments, universities, VCs, etc.

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-14T21:08:56.188Z · LW · GW

Logan Zoellner, thank you for your question.

In my view we need more research, not people who draw inferences on extremely complex matters from what random people without that knowledge bet on a given day. It is maybe fun entertainment, but it does not say anything about anything.

I do not assign any probabilities. To me, the whole probability-assigning game surrounding x-risk and AI safety in general is just silly. How can anyone say, for instance, that there is a 10% risk of human extinction? What does that mean? Is that a 1-in-10 chance at a given moment, over a 23.7678-year period, forever, or what? And most importantly, how do you come up with the figure of 10%? Based on what, exactly?

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-14T11:37:29.220Z · LW · GW

Thank you Gerald Monroe for answering my question,

I agree that staying on top of the weapons development game has had some perks, but it's not completely one-sided. Wars have, to my understanding, mostly been about control and less about extermination, so the killing is in many ways optional if the counterpart waves a white flag. When two entities with about the same military power engage in a war, that is when the real suffering happens, I believe. That is when millions die trying to win against an equal opponent. One might argue that modern wars like Iraq or Afghanistan did have one entity with a massive military power advantage over its counterpart, but the US did not use its full power (nukes) and instead opted for conventional warfare. In many senses, having no military power might be best from a survival point of view, but you will certainly be in danger of losing your freedom.

So, I understand that you believe in your priors, and they might very well be correct in predicting the future. But I still have a hard time using any kind of prior to predict what is going to happen next, since a technology as powerful as AI may turn out to be, combined with its inherent "black-boxiness", has no precedent in history. That is why I am so surprised that so many people are willing to charge ahead with the standard "move fast, break things" Silicon Valley attitude.

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T19:22:44.454Z · LW · GW

Gerald Monroe, thank you again for clarifying your thoughts.

When you say that you know with pretty high confidence that X, Y, or Z will happen, I think this encapsulates the whole debate around AI safety, i.e. that some people seem to know unknowable things for certain, which is what frightens me. How can you know, when there is nothing remotely close to the arrival of a superintelligent being in recorded human history? How do you extrapolate from data that says NOTHING about encountering a superintelligent being? I am curious to know how you managed to get so confident about the future.

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T16:36:05.080Z · LW · GW

Logan Zoellner, thank you for highlighting one of your previous points.

You asked me to agree with your speculation that GPT5 will not destroy the world. I will not agree with your speculation, because I have no idea whether GPT5 will do that or not. This does not mean that I agree with the statement that GPT5 WILL destroy the world. It just means that I do not know.

I would not use Manifold as any data point in assessing the potential danger of future AI.

 

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T16:19:13.170Z · LW · GW

Gerald Monroe, thank you for expanding on your previous comments.

You propose building these sub-human machines in order to protect humanity from everything from nuclear war to street violence. But it also sounds like there are two separate humanities: one that starts wars and spreads disease, and another, to which "we" apparently belong, that needs protection and should inherit the earth. How come those with the resources to start nuclear wars and engineer pandemics will not be in control of the best AIs, which will do their bidding? In its present form, the reason to build these sub-human machines sounds to me like an attempt to save us from the "elites".

But I think my concern that we have no idea what capabilities certain levels of intelligence entail is brushed off too easily, since you seem to assume that a GPT8 (an AI 8-12 years from now) should not pose any direct problems to humans, except for perhaps a meaning crisis due to mass layoffs, and that we should just build it. Where does this confidence come from?

 

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-13T12:39:35.356Z · LW · GW

Thank you Gerald Monroe for your comments,

My interpretation of your writing is that we should relentlessly pursue the goal of AGI because it might give us some kind of protection against a future alien invasion, even though we have no idea what we would be dealing with or whether it will even happen. Yes, the "aliens" could be swapped for AGI, but that makes the case even stranger to me: that we should develop A(G)I to protect us from AGI.

We could speculate that AGI gives a 10x improvement here and a 100x improvement there and so on. But we really do not have any idea. What if AGI is like flipping a light switch, and from one model to the next you get a trillion-fold increase in capability? How will the AI safety bots deal with that? We have no idea how to classify intelligence in terms of levels. How much smarter is a human compared to a dog? Or a snake? Or a chimpanzee? Assume for the sake of argument that a human is twice as "smart" as a chimpanzee on some crude brain-measure scale. Are humans then twice as capable as chimpanzees? We are probably close to infinitely more capable, even if our raw brain power is NOT millions or billions or trillions of times that of a chimpanzee.

We just do not have any idea what a thing even "slightly smarter" than us is capable of doing; it could be just a tiny bit better than us, or it could be close to infinitely better than us.

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-12T15:24:21.644Z · LW · GW

Logan Zoellner, thank you for further expanding on your thoughts.

No, I will not agree that GPT5 will not destroy the world, because I have no idea what it will be capable of.

I do not understand your assertion that we would be better at fending off aliens if we have access to GPT5 than if we do not. What exactly do you think GPT5 could do in that scenario?

Why do you think that having access to powerful AI's would make AGI less likely to destroy us?

If anything, I believe that the Amish scenario is less dangerous than the slow-takeoff scenario you described. In the slow-takeoff scenario there will be billions of interconnected semi-smart entities that a full-blown AGI could take control over. In the Amish scenario there would be just one large computer somewhere that is really, really smart, but that does not have the ability to hijack billions of devices, robots, and other computers to wreak havoc.

My point is this. We do not know. Nobody knows. We might create AGI and survive, or we might not survive. There are no priors and everything going forward from now on is just guesswork.

Comment by Anders Lindström (anders-lindstroem) on Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · 2024-02-12T12:21:00.151Z · LW · GW

Logan Zoellner, thank you for clarifying the concept.

However, one can argue about semantics, but since no one knows when AGI will happen if you increase the compute and/or deploy new models, all takeoffs are equally dangerous. I think a fair stance for all AI researchers and companies trying to get to AGI would be to admit that they have zero clue when AGI will be achieved, how that AI will behave, and what safety measures are needed to keep it under control.

Can anyone say with certainty that, for instance, a 100x increase in compute and model complexity over today's state of the art does not constitute AGI? A 100x increase could be achieved within 2-3 years if someone poured a lot of money into it, i.e. if someone went fishing for trillions in venture capital...

We are on a path for takeoff. Brace for impact.

Comment by Anders Lindström (anders-lindstroem) on One True Love · 2024-02-10T13:33:19.581Z · LW · GW

There are always outliers, but given how unremarkable that guy seems to be, it's a complete BS article. If he had been gay, then maybe I could have believed those numbers divided by at least 10. I know some fellas who hit up dudes on Grindr, and that's a different ball game (no pun intended).

Anyways, I think this video does a pretty good job of explaining the math behind the skewness in likes/matches that heterosexual men and women experience on dating apps.

Comment by Anders Lindström (anders-lindstroem) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-02-08T17:36:00.401Z · LW · GW

Thank you ryan_b for expanding on your thoughts,

I have been under the weather for a week; I meant to answer you earlier.

To me, having a goal and knowing why I have that goal are two separate things, and a goal does not become less of a goal because you do not know its origin. Perhaps goals form a hierarchy. We all* have some default goals like eating, surviving, and reproducing. On top of those we can add goals invented by ourselves or others. In the case where you think you are without a goal, I believe you still have goals defined by your biology. Every action or inaction is due to a goal. Why do you eat? Are you hungry? Bored? Tired? Compulsion? Want to gain weight? Want to lose weight? There is always a goal.

Take people with OCD. In what way are those persons contradicting any goals by doing OCD stuff, like checking 157 times whether the stove is off before leaving the house, to the point of missing work? Yes, the goal of getting to work was missed, but the MORE important goal of not accidentally burning down the house, killing 35 neighbors, and being the disgrace of the neighborhood was effectively achieved. So it's not that fiddling with the stove was goalless, canceling out the "real" goal of getting to work for no goal at all. The two goals were just of different importance.

If I may comment on your sex qua sex analogy: I am convinced that the sex act involved a social interaction where you wanted the other person(s) to behave in a specific way to make the act of sex as enjoyable as possible (whatever that may mean). The act of sex did not happen in a vacuum. You or the other person(s) wanted to have it, no matter what the goal was. And you or the other person(s) had to manipulate the other(s) to achieve whatever goal there was to the sex.

Yes, I agree that we need coordination with other people to achieve things, and that it may be benign. But to me there is no distinction between benign and malevolent attempts to persuade or influence someone. They are both acts of manipulation. Either you managed to get someone to do something or you did not. Why did you want this person to do this in the first place? Because you had a goal of some sort; you did not act in a vacuum. "But I just did it to be silly, or stupid, or because I was bored"; well... then that was the goal, but a goal nonetheless.

Comment by Anders Lindström (anders-lindstroem) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-02-02T11:54:01.965Z · LW · GW

Thank you ryan_b for your comment,

I do not agree. I do not believe there is any action that any living organism, much less a human, takes without a specific goal. When people say that they "just want to spread some selfless love in this grim world without asking for anything in return", they have a goal nonetheless.

I cannot, of course, say exactly what kind of goal they have, but for the sake of simplicity say that selfless-love-spreader A wants to make other people feel good in order to feel good about making other people feel good. So how does selfless-love-spreader A know that the goal has been achieved in that interaction?

Well, is it that far-fetched to assume that a smile or a thank-you from the person the selfless love was directed at is a good measure of success? I.e., selfless-love-spreader A has manipulated the person into responding with a certain behavior that made it possible for selfless-love-spreader A to reach the goal of feeling good about making other people feel good.

I believe there is a self-serving motive behind every so-called selfless act. This does not make the act less good or noble, but the act serves as a means for that person to reach a goal, whatever that goal is.

Can a human perform any type of action without a goal, no matter how small or insignificant?

Comment by Anders Lindström (anders-lindstroem) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-01-31T12:34:27.069Z · LW · GW

Thank you npostavs for your comment,

As I point out in my answer to SeñorDingDong below, we are manipulated, not persuaded, into certain actions. Just as you do not persuade an excavator to dig, you manipulate the system into the digging action by pulling levers and pushing buttons. The same must apply to other systems, including humans.

Comment by Anders Lindström (anders-lindstroem) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-01-31T12:17:09.715Z · LW · GW

SeñorDingDong, thank you for your thoughts on this. 

Let's assume that a human is a biological system with various levers and buttons, and that every human action is goal-oriented, no matter how small the action. When two of these systems interact, both have a goal for the interaction (no matter how small or insignificant). Both systems know that the other system has levers and buttons that can make it comply with the other's goal. Is it then unreasonable to frame this and all other interactions between these two systems as attempts to manipulate the other system to achieve a specific goal?

Example: System A sees System B in a nightclub. A thinks that B is a rather sassy system and wants its attention. A walks past B and tries to make eye contact by looking intently at B. A knows (thinks) that staring is a lever to pull or a button to push to turn B's head and get B's attention. B notices A's gaze and eye contact is established. A's goal is achieved, and it was achieved by manipulating B's system ever so slightly.

Now, as you pointed out, we may not know for sure what causes a specific reaction, but that does not mean that we do not want to achieve a specific reaction with our actions. Telling someone about a new toothpaste brand is no less of a manipulation attempt than a scammer trying to get someone to give them money under false pretenses, i.e. there is a goal and words are the means to achieve it. What is true or not does not matter.

Say that you want to be nice to your friends when telling them about the brand-new, super-good toothpaste they should try. Then your goal is to feel good about yourself, and the means to achieve this is to use words (I assume it will not be at gunpoint, but that is another means to achieve the same outcome) to manipulate your friends' systems into going to the supermarket and buying and trying that new toothpaste.

Would you mind explaining more about what you mean by: "Day to day life suggests this is a useful concept, and that there is a meaningful distinction between being lied to and being given true information, just as there is between coercive-control and sad movies."? For the sake of achieving a goal, I cannot see why this would matter. Placebo is a good example. If you are lied into believing that a sugar pill will cure your cancer and it does, would you rather have had the truth about the pill?

Comment by Anders Lindström (anders-lindstroem) on Win Friends and Influence People Ch. 2: The Bombshell · 2024-01-30T12:17:41.530Z · LW · GW

I guess the question we need to ask ourselves is whether all human interactions are about manipulation or not. To me it seems that using words like persuade/inspire/motivate/stimulate etc. is just the politically correct way of saying what it actually is, which is manipulation.

Manipulation is not bad in itself; it can be life-saving, as when talking someone down from a ledge. The perceived dark-artsy part of manipulation arises when the person who was manipulated into doing something realizes that it is not what they want. "Manipulation" is what people say when, for instance, inspiration has not given them the results THEY wanted. You never hear a person say "oh, I was inspired to give $10k to a scammer online", but you do hear people say "Oh, I was inspired by a friend to quit my day job and become a writer". Both were manipulated, but one still believes it is to their advantage to do what the manipulator made them do.

If not all human interactions are about manipulation, what would a non-manipulative interaction look like, and how would it play out?

Comment by Anders Lindström (anders-lindstroem) on Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI · 2024-01-27T01:19:17.765Z · LW · GW

Great report! One thing that comes to mind while reading it is the seemingly impossible task of creating a machine/system that will (must) have zero(!) catastrophic accidents. What is the rationale behind the thinking among AI proponents that humans will, with divine precision and infallibility, build a perfect machine? Have we achieved something similar, at the same level of complexity, in any other area, so that we know we have at least one "under the belt" before we embark on this perilous journey?

Comment by anders-lindstroem on [deleted post] 2024-01-09T15:32:44.080Z

It does not have to turn into a paperclip maximizer; it could turn into a billion other things that are equally bad and behave unexpectedly. Again, to me "unexpected" means that it has behaved in a way that no one had foreseen or predicted. Hence, if any guardrails work, it would be just by chance or luck. Either we know how the system will behave and can control that, or we don't. Any unexpected or surprising behavior COULD spell disaster. If you think that this kind of system behavior is to be expected, you are, in my world, just saying that you expect disaster to happen.

Comment by Anders Lindström (anders-lindstroem) on Architects of Our Own Demise: We Should Stop Developing AI · 2023-10-26T12:27:06.480Z · LW · GW

"Is it reasonable to then conclude that you will be able to predict and control the behaviour of much more complex, multicelled creature called a "human" by spreading sugar out on the ground?"

Yes. Last time I checked the obesity stats it seemed to work just fine...

Jokes aside, you are making an important point. As we speak, we have no idea how to even control humans. Even though we are humans ourselves (possibly) and should have a pretty good idea of what makes us tick, we are clueless. Of course we can control humans to a certain degree (society, force, drugs, etc.), but there are and always will be rogue elements that are uncontrollable. Being able to control 99.99999999999% of all future AIs won't cut it. It's either 100% or an epic fail (I guess this is the only time it is warranted to use the word "epic" when talking about fails).

Comment by Anders Lindström (anders-lindstroem) on How it feels to have your mind hacked by an AI · 2023-01-12T22:56:58.734Z · LW · GW

Thanks for the links. This could take on epidemic proportions and could mind-screw whole generations if it goes south. Like all addictions, it will be difficult to get people to talk about it and to get a picture of how big a problem this is/will be. But OpenAI, for instance, should already have a pretty good picture by now of how many users are spending long hours chatting with GFE/BFE characters.

The tricky part is when people share good "character prompts". It's like spreading a brain virus. Even if just 1 in 20 or 1 in 100 gets infected, it can have a massive R-number (for certain super-spreaders), like if a big influencer (hmmm...) such as Elon says "try this at home!"

Comment by Anders Lindström (anders-lindstroem) on How it feels to have your mind hacked by an AI · 2023-01-12T22:26:18.411Z · LW · GW

Thanks for sharing. I will predict two things: 1. An avalanche of papers published in the next 6-12 months outlining the "unexpected" persuasive nature of LLMs. 2. Support groups for LLM addicts, with forums featuring topics like "Is it ethical to have two or more GFE characters at the same time?" or "What prompt are you planning to write to your GFE character for your anniversary?"

However, let's not forget the Tamagotchi. It wasn't an LLM or borderline AGI; it was a $20 toy, but people (kids) fought tooth and nail to keep it alive. Now imagine an AGI: how many people will fight to keep it alive when "you" want to pull the kill switch? Maybe the kill switch problem will be more about human emotions than technical feasibility.