Posts

Anders Lindström's Shortform 2024-06-12T11:30:18.621Z

Comments

Comment by Anders Lindström (anders-lindstroem) on Why muscle tension can be unsexy · 2024-12-06T10:06:18.871Z · LW · GW

Being tense might also be a direct threat to your health in certain situations. I saw an interview on TV with an expert on hostage situations some 7-10 years ago and he claimed that the number one priority for a captive should be to somehow find a way to relax their body. He said if they are not able to do that, the chance is very high that they will develop PTSD.

Comment by Anders Lindström (anders-lindstroem) on Should there be just one western AGI project? · 2024-12-04T11:07:58.384Z · LW · GW

For national security reasons, it would be strange to assume that there is no coordination among the US firms already. And... are we really sure that China is behind in the AGI race? 

Comment by Anders Lindström (anders-lindstroem) on Estimates of GPU or equivalent resources of large AI players for 2024/5 · 2024-12-01T09:55:59.504Z · LW · GW

And one wonders how much of the bottleneck is TSMC (the western "AI bloc" has really put a lot of its eggs in one basket...) and how much is customer preference for Nvidia chips. The chip wars of 2025 will be very interesting to follow. Thanks for a good chip summary!

Comment by Anders Lindström (anders-lindstroem) on Estimates of GPU or equivalent resources of large AI players for 2024/5 · 2024-11-30T22:34:04.054Z · LW · GW

What about AMD? I saw on the latest TOP500 supercomputer list that systems using both AMD CPUs and GPUs now hold places 1, 2, 5, 8 and 10 among the top 10 systems. Yes, the workloads on these computers are a bit different from a pure GPU training cluster, but still. 

https://top500.org/lists/top500/2024/11/

Comment by Anders Lindström (anders-lindstroem) on Dave Kasten's AGI-by-2027 vignette · 2024-11-28T12:49:42.990Z · LW · GW

Yes, the soon-to-be-here "human level" AGI people talk about is for all intents and purposes ASI. Show me one person who is at the highest expert level in thousands of subjects, has the content of all human knowledge memorized, and can draw the most complex inferences on that knowledge across multiple domains in seconds.

Comment by Anders Lindström (anders-lindstroem) on Counting AGIs · 2024-11-27T22:32:42.463Z · LW · GW

It's interesting that you mention hallucination as a bug/artefact. I think hallucination is what we humans do all day, every day, when we are trying to solve a new problem. We think up a solution we really believe is correct, then we try it and, more often than not, realize we had it all wrong, and we try again and again and again. I think AIs will never be free of this; I just think it will be part of their creative process, as it is in ours. It took Albert Einstein a decade or so to figure out relativity theory; I wonder how many times he "hallucinated" a solution that turned out to be wrong during those years. The important part is that he could self-correct and dive deeper and deeper into the problem and finally solve it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your "remote worker" a day or ten to think through a really hard problem, not even the sky will be the limit...

Comment by Anders Lindström (anders-lindstroem) on Counting AGIs · 2024-11-26T21:51:14.057Z · LW · GW

Thanks for writing this post!

I don't know what the correct definition of AGI is, but to me it seems that AGI is ASI. Imagine an AI that is at super expert level in most (>95%) subjects, has access to pretty much all human knowledge, is capable of digesting millions of tokens at a time, and can draw inferences and conclusions from that in seconds. "We" normally have a handful of real geniuses per generation. So now imagine a simulated person that is like Stephen Hawking in physics, Terence Tao in math, Rembrandt in painting, etc., all at the same time. Now imagine that you have "just" 40,000-100,000 of these simulated persons, able to communicate at the speed of light and to use all the knowledge in the world within milliseconds. I think this will be a very transformative experience for our society from the get-go.

Comment by Anders Lindström (anders-lindstroem) on A very strange probability paradox · 2024-11-24T14:28:40.639Z · LW · GW

The thing is that, if you roll a 6 and then a non-6, in an "A" sequence you're likely to just die due to rolling an odd number before you succeed in getting the double 6, and thus exclude the sequence from the surviving set; whereas in a "B" sequence there's a much higher chance you'll roll a 6 before dying, and thus include this longer "sequence of 3+ rolls" in the set.

 

Yes! This kind of kills the "paradox". It's approaching an apples-and-oranges comparison.

Surviving sequences with n=100 rolls (for illustrative purposes)

[6, 6]
[6, 6]
[2, 6, 6]
[6, 6]
[2, 6, 6]
[6, 6]
Estimate for A: 2.333
[6, 6]
[4, 4, 6, 2, 2, 6]
[6, 6]
[6, 2, 4, 4, 6]
[6, 4, 6]
[4, 4, 6, 4, 6]
[6, 6]
[6, 6]
Estimate for B: 3.375

If you rephrase:

A: The probability that you roll a fair die until you roll two 6s in a row, with all rolls being even.

B: The probability that you roll a fair die until you roll two 6s (not necessarily in a row), with all rolls being even.

This changes the code to:

A_estimate = num_sequences_without_odds/n

B_estimate = num_sequences_without_odds/n

And the result (n=100,000):

Estimate for A: 0.045
Estimate for B: 0.062

I guess this is what most people were thinking when reading the problem, i.e., there is a bigger chance of getting two 6s that are not necessarily consecutive. But by the wording (see above) of the "paradox", the surviving sequences have more rolls on average, while on the other hand there are more surviving sequences, hence the higher probability.
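For reference, here is a minimal Python sketch of the kind of simulation behind these numbers (my own illustration, not the original post's code; with n = 100,000 it should roughly reproduce the 0.045 and 0.062 above):

import random

def estimate(n, consecutive, seed=0):
    # Simulate n sequences: roll a fair die until two 6s have appeared
    # (in a row if `consecutive`, in any positions otherwise). A sequence
    # "dies" (is discarded) as soon as an odd number is rolled.
    # Returns (share of surviving sequences, mean length of survivors).
    rng = random.Random(seed)
    surviving_lengths = []
    for _ in range(n):
        rolls = 0
        sixes = 0
        while True:
            r = rng.randint(1, 6)
            rolls += 1
            if r % 2 == 1:        # odd roll: the sequence is discarded
                break
            if r == 6:
                sixes += 1
            elif consecutive:
                sixes = 0         # a 2 or 4 resets the streak in version A
            if sixes == 2:        # stopping condition reached with no odd roll
                surviving_lengths.append(rolls)
                break
    p_survive = len(surviving_lengths) / n
    mean_len = sum(surviving_lengths) / len(surviving_lengths)
    return p_survive, mean_len

n = 100_000
pA, lenA = estimate(n, consecutive=True)
pB, lenB = estimate(n, consecutive=False, seed=1)
print(f"A (two 6s in a row):       P(no odd rolls) ~ {pA:.3f}, mean length ~ {lenA:.2f}")
print(f"B (two 6s, any positions): P(no odd rolls) ~ {pB:.3f}, mean length ~ {lenB:.2f}")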

Comment by Anders Lindström (anders-lindstroem) on OpenAI's CBRN tests seem unclear · 2024-11-22T10:02:45.723Z · LW · GW

With the scaling in compute it will not take long until small groups or even a single individual can train or fine-tune an open-source model to reach o1's level (and beyond). So I am wondering about the data. Does, for instance, o1's training set in these subjects contain data that is very hard to come by, or is it mostly publicly available data? If it is the former, the limiting factor is access to data and it should be reasonably easy to contain the risks. If it is the latter... Oh boy...

Comment by Anders Lindström (anders-lindstroem) on What are the good rationality films? · 2024-11-20T23:10:48.095Z · LW · GW

"The Prime Gig": explores the life of Pendleton "Penny" Wise, a charismatic but morally conflicted telemarketer, as he navigates the cutthroat world of high-stakes sales schemes. Torn between ambition, romance, and integrity, he must decide whether to pursue wealth at the cost of his principles.

https://m.imdb.com/video/embed/vi4227907353/

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:52:33.733Z · LW · GW

I know I am a parrot here, but they are playing two different games. One wants to find one partner and then stop. The other wants to find as many partners as possible. You cannot compare utility across different goals. Yes, the poly person will have higher expected utility, but it is NOT comparable to the utility that the mono person derives. 

The wording should have been:
10% chance of finding a monogamous partner 10 times yields 1 monogamous partner in expectation and 0.63 in expected utility.
Not:
10% chance of finding a monogamous partner 10 times yields 0.63 monogamous partners in expectation.

and:
10% chance of finding a polyamorous partner 10 times yields 1 polyamorous partner in expectation and 1 in expected utility.
instead of:
10% chance of finding a polyamorous partner 10 times yields 1.00 polyamorous partners in expectation.

So there was a mix-up between the expected number of successes and the expected utility.
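In numbers (a quick sketch of my reading, assuming the monogamous searcher's utility is 1 as soon as at least one partner is found and the polyamorous searcher's utility simply counts partners):

p, n = 0.10, 10

expected_successes = n * p                 # = 1.0, the same for both searchers

# Monogamous utility: 1 if at least one success, else 0
mono_expected_utility = 1 - (1 - p) ** n   # ~0.651 (~0.63 with the 1 - 1/e rule of thumb)

# Polyamorous utility: every success counts
poly_expected_utility = n * p              # = 1.0

print(expected_successes, mono_expected_utility, poly_expected_utility)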

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:09:11.065Z · LW · GW

Yes. But I think you have mixed up expected value and expected utility. Please show your calculations.

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:04:18.893Z · LW · GW

I do not understand your reasoning. Please show your calculations.

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T16:17:04.478Z · LW · GW

No, I think you are mixing up the probability of at least one success in ten trials (with a 10% chance per trial), which is ~0.65 = 65%, with the expected value, which is n=1 in both cases. You have the same chance of finding 1 partner in each case and you do the same number of trials. There is a 65% chance that you have at least 1 success in the 10 trials for each type of partner. The expected outcome in BOTH cases is 1 as in n=1, not 1 as in 100%.

Probability of at least one success: ~65%
Probability of at least two successes: ~26%
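For completeness, here are those two tail probabilities computed exactly with the binomial distribution (a small sketch of my own, not anything from the original post):

from math import comb

p, n = 0.10, 10

def prob_at_least(k):
    # P(at least k successes in n independent trials with success chance p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"At least one success:   {prob_at_least(1):.3f}")   # ~0.651
print(f"At least two successes: {prob_at_least(2):.3f}")   # ~0.264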

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T13:45:34.134Z · LW · GW

Why would the expectation of finding a polyamorous partner be higher in the case you gave? The same chance per try and the same number of tries should give the same expectation.

Comment by Anders Lindström (anders-lindstroem) on The Online Sports Gambling Experiment Has Failed · 2024-11-18T09:38:33.441Z · LW · GW

Good write-up! People Cannot Handle "fill in the blank" on smartphones. Sex, food, drugs, social status, betting, binge-watching, shopping, etc., in abundance and a click away, is something we simply cannot handle. If some of the biggest corporations in the world spend billions upon billions each year to grab our attention, they will win and "you" will on average lose, unless you pull the cord (or turn off the wifi...) or have extreme willpower. 

I am definitely not the one to throw the first rock, but is it not pretty embarrassing that most of us who thought we were so smart and independent are mere serfs, both intellectually and physically, to a little piece of electronics that has completely and utterly hijacked our brains and bodies?

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-13T08:08:24.357Z · LW · GW

Yeah, that comic summarizes it all!

As a side note, I wonder how many would get their PhD degree if the requirement were to publish 3-4 papers (2 single-author and 1-2 co-authored) where the main result (where applicable) needed to have p<0.01? Perhaps the paper-publishing frenzy would slow down a little bit if the monograph came into fashion again?

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-12T20:43:59.195Z · LW · GW

Agree. I have never understood why p=0.05 is a holy threshold. People (and journals) toss out research if they get p=0.06, but they think they are on their way to a Nobel prize with p=0.04. Madness.

Comment by Anders Lindström (anders-lindstroem) on evhub's Shortform · 2024-11-09T18:50:51.100Z · LW · GW

We live in an information society -> "you" are trying to build the ultimate dual-use information tool/thing/weapon -> the government requires your services. No news there. So why the need to whitewash this? What about this is actually bothering you?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T19:50:56.944Z · LW · GW

I understood why you asked; I am also interested, in general, in why people upvote or downvote something. It could be really good information and food for thought.

Yeah, who doesn't want capital-T Truth... But I have come to appreciate the subjective experience more and more. I like science and rational thinking, and they have gotten us pretty far, but who am I to question someone's experience? If someone met 'the creator' on an ayahuasca journey or thinks that love is the essence of the universe, who am I to judge? When I see the statistics on the massive use of antidepressants, it is obvious to me that we can't use rational and logical thinking to think our way out of our feelings. What are rationality and logical thinking good for if, in the end, they can't make us feel good?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T09:22:18.128Z · LW · GW

When it comes to being up- or downvoted, it's a real gamble. The same mechanisms are of course at play here as on other social (media) platforms, i.e., there are certain individuals, trends and thought patterns that are hailed and praised, and vice versa, without any real justification. But hey, that is what makes us human, these unexplainable things called feelings.

PS: perhaps a new hashtag on X would be appropriate: #stopthedownvoting

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-04T20:36:51.516Z · LW · GW

Some stuff just works, but for reasons unknown to the practitioner. Trial and error is a very powerful tool if used over many generations to "solve" a particular problem. But that does not mean anyone knows WHY it works.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-27T22:23:29.625Z · LW · GW

I am so thrilled! Daylight saving time got me to experience (kind of) the sleeping beauty problem first hand.

Last night we in Sweden set our clocks back one hour, from 03.00 to 02.00, and went from "summertime" to the dreaded "wintertime". It's dreaded because we know what follows with it: ice storms and polar bears in the streets... 

Anyway, I woke up in the middle of the night and reached for my phone to check what time it was. It was 02.50. Then it struck me. Am I experiencing the first 02.50 or the second 02.50 tonight? That is, have I first slept until 03.00, after which the clock changed back to 02.00 (which it does automatically on the phone), and then slept until the new 02.50, or am I at the first 02.50, so that in 10 minutes, at 03.00, the clock will switch back to 02.00? 

It was a very dizzying thought. I could not for the life of me say either way. There was nothing in the dark that could give me any indication whether I was experiencing the first or the second 02.50. Then, with my thoughts spinning, I slowly waited for the clock on my phone to turn 03.00. When it did, it did not go back to 02.00: I had experienced the second 02.50 that night.

Comment by Anders Lindström (anders-lindstroem) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-17T21:34:41.834Z · LW · GW

Maybe I was a bit vague. I was trying to say that waking up SB's twin sister on Monday was a way of saying that SB would be exactly as aware of that event as she would be of her own Monday awakening under the conditions stipulated in the original experiment, i.e., zero recollection of the event. Or the other way around: SB is awakened on Monday but her twin sister on Tuesday, and SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time no matter whether it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the "first" time and the "last" time. If she is awake, as far as she knows it is an equal chance that it is day 1 on the heads path as that it is day 56670395873966 (or any other day) on the tails path.

Or like this. Imagine that I flip a coin that I can see but you cannot. I give you the rule that if it is heads, I show you a picture of a dog. If it is tails, I show you the same picture of a dog, but I might have shown this picture to thousands of people before you and maybe thousands of people after you, which you have no information about. You might be the first one to see it, but you might also be the last one to see it, or somewhere in the middle, i.e., you are not aware of the other observers. When I show you the picture of the dog, what chance do you give that the coin flip was heads?

But I am curious to know how a person with the thirder position argues in the case where she is awakened 999 or 8490584095805 times on the tails path: what probability should SB give to heads in that case?

Comment by Anders Lindström (anders-lindstroem) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T19:23:58.472Z · LW · GW

If the experiment were instead constructed such that:

  1. If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  2. If the coin comes up tails, Sleeping Beauty's twin sister will be awakened and interviewed on Monday and Sleeping Beauty will be awakened and interviewed on Tuesday.

In this case it is "obvious" that the halfer position is the right choice. So why would it be any different if Sleeping Beauty, in the case of tails, is awakened on Monday too, since she in this experiment has zero recollection of that event? It does not matter how many other people they have woken up before the day she is woken up; she has NO new information that could update her beliefs. 

Or say that the experiment were instead constructed so that for tails she would be woken up and interviewed 999999 days in a row; would she then say, upon being woken up, that the probability that the coin landed heads is 1/1000000?
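To make the two ways of counting concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the post): counting per experiment gives the halfer's 1/2 regardless of how many awakenings the tails path brings, while counting per awakening gives roughly 1/(k+1) when tails brings k awakenings.

import random

def sleeping_beauty(num_experiments=100_000, tails_awakenings=2, seed=0):
    # Returns (fraction of heads per experiment, fraction of heads per awakening).
    rng = random.Random(seed)
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(num_experiments):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1                 # one awakening on the heads path
            total_awakenings += 1
        else:
            total_awakenings += tails_awakenings  # k awakenings on the tails path
    return heads_experiments / num_experiments, heads_awakenings / total_awakenings

for k in (2, 999999):
    per_experiment, per_awakening = sleeping_beauty(tails_awakenings=k)
    print(f"k={k}: per experiment ~ {per_experiment:.3f}, per awakening ~ {per_awakening:.6f}")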

Comment by Anders Lindström (anders-lindstroem) on Shortform · 2024-10-09T21:02:17.116Z · LW · GW

I think it is just the cumulative effect of people seeing yet another prominent AI scientist "admit" that no one has any clear solution to the possible problem of a runaway ASI. Given that the median p(doom) among AI scientists is about 5-10%, people are of course wondering wtf is going on: why are they pursuing a technology with such high risk for humanity if they really think it is that dangerous?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-08T10:35:55.938Z · LW · GW

Congratulations to Geoffrey Hinton and John Hopfield! 
I wonder if Roko's basilisk will spare the Nobel prize committee now: https://www.nobelprize.org/prizes/physics/2024/press-release/

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-04T09:45:03.725Z · LW · GW

That is a very interesting perspective and mindset! Do you, in that scenario, think you will focus on value created in terms of solving technical problems, or do you think you will focus on "softer" problems that are more centered on human wellbeing?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-04T09:38:39.085Z · LW · GW

Thanks for your input. I really like that you pointed out that AI is just one of many things that could go wrong; perhaps people like me and others are too caught up in the p(doom) buzz to see all the other stuff.

But I wonder one thing about your Plan B, which seems rational: what if a lot of people have entry-level care work as their back-up? How will you stave off that competition? Or do you think it's a matter of avoiding loss aversion and getting out of your Plan A game early, without lingering (if some pre-stated KPI of yours goes above or below a certain threshold), to grab one of those positions?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-01T00:00:14.376Z · LW · GW

Now that AGI seems to be arriving in the near term (2-5 years) and ASI is possibly 5-10 years away (i.e., a few thousand days), what do you personally think will help you stay relevant and in demand? What do you read/watch/study/practice? What skills are you focusing on sharpening? What "plan B" and "plan C" do you have? What field of work/study would you recommend others steer away from ASAP?

Asking for a friend...

Comment by Anders Lindström (anders-lindstroem) on What is "True Love"? · 2024-08-20T12:07:56.491Z · LW · GW

I think David Buss's book "When Men Behave Badly" is a good starting point for trying to understand the dynamics of heterosexual dating and mating. 

Comment by Anders Lindström (anders-lindstroem) on Rabin's Paradox · 2024-08-15T10:08:36.787Z · LW · GW

Real life translations:

Expected value = That thing that never happens to me unless it is a bad outcome

Loss aversion = It's just the first week of this month and I have already lost 12 arguments, been ripped off twice and gotten 0 likes on Tinder

A fair coin flip = Life is not fair

Utility function = Static noise

Comment by Anders Lindström (anders-lindstroem) on GPT-4o System Card · 2024-08-09T07:38:31.350Z · LW · GW

Thank you for refreshing my memory

Comment by Anders Lindström (anders-lindstroem) on GPT-4o System Card · 2024-08-08T21:49:55.367Z · LW · GW

Perhaps my memory fails me, but didn't Anthropic say that they would NOT be the ones pushing the envelope, but would play it safe instead? From the METR report: "GPT-4o appeared more capable than Claude 3 Sonnet and GPT-4-Turbo, and slightly less than Claude 3.5 Sonnet."

Comment by Anders Lindström (anders-lindstroem) on AI Rights for Human Safety · 2024-08-03T11:05:52.112Z · LW · GW

Existing legal institutions are unprepared for the AGI world.

 

Every institution is unprepared for the AGI world. And judging from history, laws will always lag behind technological development. I do not think there is much a lawmaker can do other than be reactive to future tech; there are just too many "unknown unknowns" to be proactive. Sure, you can say "everything is forbidden", but that does not work in reality. I guess the paradox here is that we want the laws to be stable over time but we also want them to be easy to change on a whim.

Comment by Anders Lindström (anders-lindstroem) on Bryan Johnson and a search for healthy longevity · 2024-07-28T09:50:11.078Z · LW · GW

The question one should perhaps ask is why you would want to live "forever". Of course I understand the idea of having a healthy body; it is no fun being sick and in pain. But since we have no idea where we came from, where we are, and where we are going, perhaps there is as much meaning in death as we think there is in life. 

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-09T15:54:59.024Z · LW · GW

For starters, it could be used as a diplomatic tool with tremendous bargaining power, as well as a deterrent to anyone who wanted to challenge US post-war dominance in all fields. 

Now imagine a machine that is better at solving any problem in all of science than all the smartest people and scientists in the world. Would not this machine give its owners EXTREME advantages in all things related to government/military/intelligence?! 

 

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-09T15:33:36.619Z · LW · GW

Well, how many in Congress and the Senate had heard about the Manhattan Project? 
"Keeping 120,000 people quiet would be impossible; therefore only a small privileged cadre of inner scientists and officials knew about the atomic bomb's development. In fact, Vice-President Truman had never heard of the Manhattan Project until he became President Truman."
https://www.ushistory.org/us/51f.asp

When it comes to the scientists, we have no idea if the work they do in "private" companies is part of a bigger government-led effort, which would be the most efficient way, I suppose.

I don't really understand why some people seem to get so upset about the idea that the government/military is involved in developing cutting-edge technology. As if AI is something that governments/militaries are not allowed to touch? The military-industrial complex has been and will always be involved in these kinds of endeavors.  

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-07T10:45:47.522Z · LW · GW

To be clear, I am confident that governments and militaries will be extremely interested in AI.

It makes perfect sense that it will turn into a Manhattan project, and it probably (p>0.999999...) already has. The idea that the government, military, and intelligence agencies have not yet received the memo about AI/AGI/ASI is beyond naive.


Just as being the first to develop a nuclear bomb carried extreme advantages, being the first to achieve AGI might carry the same EXTREME advantages.

Comment by Anders Lindström (anders-lindstroem) on Finding the Wisdom to Build Safe AI · 2024-07-05T12:50:54.645Z · LW · GW

Are we really sure that we should model AIs in the image of humans? We apparently cannot align people with people, so why would a human replica be that different? If we train an AI to behave like a human, why do we expect the AI NOT to behave like a human? Like it or not, part of what makes us human is lying, stealing, and violence.

"Fifty-two people lost their lives to homicide globally every hour in 2021, says new report from UN Office on Drugs and Crime". https://unis.unvienna.org/unis/en/pressrels/2023/uniscp1165.html

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-28T16:02:52.520Z · LW · GW

I am starting to believe that military use of AI is perhaps the best and fastest way to figure out if large-scale AI alignment is possible at all. Since the military will actively seek to develop AIs that kill humans, they must also figure out how not to kill ALL humans. I hope the military will be open about their successes and failures, about what works and what does not work.

Comment by Anders Lindström (anders-lindstroem) on Childhood and Education Roundup #6: College Edition · 2024-06-26T23:28:00.427Z · LW · GW

The other was taught by a Harvard prof. He informed us TFs that an A is the default grade. A- would require justification.

Great post!

But can this be true? I don't care whether it is fair or not to do so. I just wonder if Harvard would be so stupid as to destroy their own brand. If the people hiring Harvard students start to understand that the grades do not reflect the students' knowledge of a subject even a little bit, it can go south pretty fast.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T22:22:18.803Z · LW · GW

Thanks for pointing me to Zvi's work

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T22:20:16.667Z · LW · GW

Yes, I meant an LLM in the context of a user who fed in a query about his or her problem and got a novel solution back. It is always debatable what a "real" or "hard" problem is, but as a lower bound I mean something that would make people here at LW raise an eyebrow or two if the LLM solved it. Otherwise there is, as you mention, plenty of stuff/problems that "custom" AI/machine learning models have solved for a long time.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T11:03:44.987Z · LW · GW

I have yet to read a post here on LW where someone writes about a frontier model that has solved a "real" problem, where the person had really tried for a long(-ish) time to solve it but failed and then the AI solved it for them: a research problem, a coding problem, a math problem, a medical problem, a personal problem, etc. Has anyone experienced this yet?

Comment by anders-lindstroem on [deleted post] 2024-06-22T11:11:29.080Z

What you say in your post is common sense. Unfortunately there is no room for common-sense questions in the race for AGI/ASI. From what people in the industry say about their own future predictions, AGI/ASI seems potentially VERY dangerous. However, we cannot stop for a second to think/talk about it.

Where I think the biggest misalignment is right now is not the AI models vs. humans. GPT, Claude, Gemini et al. are all very well aligned with humans. What is NOT aligned at all with humans is the AI companies' plans for the future. 

At least in western countries, minute details of civil laws or the tax code with a p(doom)<0.00000000000000000001 can be publicly debated for years before they are implemented. But with a technology that AI insiders have predicted to have a p(doom)>0.05, we (the people) should just accept that risk and be quiet, because we are too stupid to understand that all deaths are not equal...

Are there any alternatives to AGI/ASI for solving big problems? Here is a quiz:

Cure cancer?
A) AGI/ASI
B) Stopping eating ultra processed food

End world hunger?
A) AGI/ASI
B) Redistribution of food

Stop (human influenced) climate change?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon

Stop microplastic pollution of the oceans?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon

Comment by Anders Lindström (anders-lindstroem) on Ilya Sutskever created a new AGI startup · 2024-06-20T14:57:58.485Z · LW · GW

Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...

Comment by Anders Lindström (anders-lindstroem) on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T11:59:32.197Z · LW · GW

Don't worry, fusion power is just 10 years away...

This paper is perhaps old news for most of you interested in energy, but I find it to be a good conversation starter when it comes to what kind of energy system we should aim for. 

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-20T11:39:59.945Z · LW · GW

For the record, I do not mean to single out Altman. I am talking in general about leading figures in the AI space (i.e., Altman et al.), for whom Altman has become a convenient proxy since he is a very public figure. 

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-18T22:07:19.375Z · LW · GW

The current median p(doom) among AI scientists seems to be 5-10%. How can it NOT be reckless to pursue, without extreme caution, something that is believed by the people with the most knowledge in the field to be close to a round of Russian roulette for humankind?

Imagine for a second that I am a world-leading scientist dabbling at home with viruses that could potentially give people eternal life and health, but that I publicly state that "based on my current knowledge and expertise there is maybe a 10% risk that I accidentally wipe out all humans in the process, because I have no real idea how to control the virus". Would you then:

A) Call me a reckless idiot, send a SWAT team to put me behind bars, and destroy my lab and other labs that might be dabbling with the same biotech.

B) Say "let the boy play".