Posts

Anders Lindström's Shortform 2024-06-12T11:30:18.621Z

Comments

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-13T08:08:24.357Z · LW · GW

Yeah, that comic summarizes it all!

As a side note, I wonder how many would get their PhD degree if the requirement was to publish 3-4 papers (2 single-author and 1-2 co-authored) where the main result (where applicable) needed to have p<0.01. Perhaps the paper-publishing frenzy would slow down a little bit if the monograph came into fashion again?

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-12T20:43:59.195Z · LW · GW

Agree. I have never understood why p=0.05 is a holy threshold. People (and journals) toss out research if they get p=0.06, but they think they are on their way to a Nobel prize with p=0.04. Madness.
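
To illustrate (a toy simulation of my own, not anything from the post): run the same experiment, with the same true effect, many times, and about as many replications land just below p=0.05 as just above it. The threshold cuts a smooth continuum in half:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pvals = []
for _ in range(10_000):
    control = rng.normal(0.0, 1.0, 50)
    treated = rng.normal(0.4, 1.0, 50)  # identical true effect in every replication
    pvals.append(stats.ttest_ind(treated, control).pvalue)
pvals = np.asarray(pvals)

# Same experiment, same true effect, yet the hard threshold splits
# near-identical outcomes into "discovery" and "null result":
print(f"share with p in [0.04, 0.05): {np.mean((pvals >= 0.04) & (pvals < 0.05)):.3f}")
print(f"share with p in [0.05, 0.06): {np.mean((pvals >= 0.05) & (pvals < 0.06)):.3f}")
```

The two shares come out nearly equal: p=0.04 and p=0.06 are practically the same strength of evidence, yet only one of them gets published.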

Comment by Anders Lindström (anders-lindstroem) on evhub's Shortform · 2024-11-09T18:50:51.100Z · LW · GW

We live in an information society -> "you" are trying to build the ultimate dual-use information tool/thing/weapon -> the government requires your services. No news there. So why the need to whitewash this? What about this is actually bothering you?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T19:50:56.944Z · LW · GW

I understood why you asked; I am also interested, in general, in why people upvote or downvote something. It could be really good information and food for thought.

Yeah, who doesn't want capital-T truth... But I have come to appreciate the subjective experience more and more. I like science and rational thinking, and they have gotten us pretty far, but who am I to question someone's experience? If someone met "the creator" on an ayahuasca journey or thinks that love is the essence of the universe, who am I to judge? When I see the statistics on the massive use of anti-depressants, it is obvious to me that we can't use rational and logical thinking to think our way out of our feelings. What are rationality and logical thinking good for if, in the end, they can't make us feel good?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T09:22:18.128Z · LW · GW

When it comes to being up- or down-voted, it's a real gamble. The same mechanisms are of course at play here as on other social (media) platforms, i.e., there are certain individuals, trends, and thought patterns that are hailed and praised, and vice versa, without any real justification. But hey, that is what makes us human: these unexplainable things called feelings.

PS: perhaps a new hashtag on X would be appropriate: #stopthedownvoting

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-04T20:36:51.516Z · LW · GW

Some stuff just works, but for reasons unknown to the practitioner. Trial and error is a very powerful tool when used over many generations to "solve" a particular problem. But that does not mean anyone knows WHY it works.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-27T22:23:29.625Z · LW · GW

I am so thrilled! Daylight saving time got me to experience (kind of) the Sleeping Beauty problem firsthand.

Last night we in Sweden set our clocks back one hour, from 03.00 to 02.00, and went from “summertime” to the dreaded “wintertime”. It’s dreaded because we know what follows with it: ice storms and polar bears in the streets...

Anyway, I woke up in the middle of the night and reached for my phone to check the time. It was 02.50. Then it struck me: am I experiencing the first 02.50 or the second 02.50 tonight? That is, have I first slept past 03.00, after which the clock changed back to 02.00 (which it does automatically on the phone), and then slept until the new 02.50? Or am I at the first 02.50, so that in 10 minutes, at 03.00, the clock will switch back to 02.00?

It was a very dizzying thought. I could not for the life of me say which. There was nothing in the dark that could give me any indication of whether I was experiencing the first or the second 02.50. Then, with my thoughts spinning, I slowly waited for the clock on my phone to turn 03.00. When it did, it did not go back to 02.00: I had experienced the second 02.50 that night.

Comment by Anders Lindström (anders-lindstroem) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-17T21:34:41.834Z · LW · GW

Maybe I was a bit vague. I was trying to say that waking SB's twin sister on Monday would be equivalent, from SB's point of view, to SB herself being awakened on Monday under the conditions stipulated in the original experiment, i.e. with zero recollection of the event. Or the other way around: SB is awakened on Monday and her twin sister on Tuesday; SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time, no matter whether it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the "first" time and the "last" time. If she is awake, it is, as far as she knows, equally likely to be day 1 on the heads path as day 56670395873966 (or any other day) on the tails path.

Or like this. Imagine that I flip a coin that I can see but you cannot. I give you the rule: if it is heads, I show you a picture of a dog. If it is tails, I show you the same picture of a dog, but I might have shown it to thousands of people before you and might show it to thousands of people after you, which you have no information about. You might be the first one to see it, the last one, or somewhere in the middle, i.e. you are not aware of the other observers. When I show you the picture of the dog, what probability do you give that the coin flip was heads?

But I am curious to know how a person with the thirder position argues in the case where she is awakened 999 or 8490584095805 times on the tails path. What probability should SB give heads in that case?

Comment by Anders Lindström (anders-lindstroem) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T19:23:58.472Z · LW · GW

If the experiment instead was constructed such that:

  1. If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  2. If the coin comes up tails, Sleeping Beauty's twin sister will be awakened and interviewed on Monday and Sleeping Beauty will be awakened and interviewed on Tuesday.

In this case it is "obvious" that the halfer position is the right choice. So why would it be any different if Sleeping Beauty, in the case of tails, were awakened on Monday too, since in this experiment she has zero recollection of that event? It does not matter how many other people have been woken up before the day she is woken up; she has NO new information that could update her beliefs.

Or say that the experiment was instead constructed so that for tails she would be woken up and interviewed 999999 days in a row. Would she then say, upon being woken up, that the probability that the coin landed heads is 1/1000000?
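
For what it's worth, here is a toy Monte Carlo sketch (my own illustration, with made-up function and parameter names) of how the two camps count. The halfer counts per coin flip; the thirder counts per awakening, which for N tails-awakenings drives the answer toward 1/(N+1):

```python
import random

def sleeping_beauty(n_tails_awakenings, trials=100_000):
    """One coin flip per trial: heads gives 1 awakening,
    tails gives n_tails_awakenings awakenings."""
    heads_flips = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if random.random() < 0.5:  # heads
            heads_flips += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:                      # tails: just count the awakenings
            total_awakenings += n_tails_awakenings
    print(f"N={n_tails_awakenings}: per flip {heads_flips / trials:.4f}, "
          f"per awakening {heads_awakenings / total_awakenings:.6f}")

sleeping_beauty(2)        # classic setup: ~0.5 per flip, ~1/3 per awakening
sleeping_beauty(999999)   # extreme setup: ~0.5 per flip, ~1/1000000 per awakening
```

The simulation of course only shows what each counting rule yields; it does not settle which rule SB should use, which is the whole disagreement.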

Comment by Anders Lindström (anders-lindstroem) on Shortform · 2024-10-09T21:02:17.116Z · LW · GW

I think it is just the cumulative effect of people seeing yet another prominent AI scientist "admit" that no one has any clear solution to the possible problem of a runaway ASI. Given that the median p(doom) is about 5-10% among AI scientists, people are of course wondering wtf is going on: why are they pursuing a technology with such a high risk for humanity if they really think it is that dangerous?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-08T10:35:55.938Z · LW · GW

Congratulations to Geoffrey Hinton and John Hopfield! 
I wonder if Roko's basilisk will spare the Nobel prize committee now: https://www.nobelprize.org/prizes/physics/2024/press-release/

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-04T09:45:03.725Z · LW · GW

That is a very interesting perspective and mindset! In that scenario, do you think you will focus on value created in terms of solving technical problems, or do you think you will focus on "softer" problems that are more centered on human wellbeing?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-04T09:38:39.085Z · LW · GW

Thanks for your input. I really like that you pointed out that AI is just one of many things that could go wrong; perhaps people like me are too caught up in the p(doom) buzz to see all the other stuff.

But I wonder one thing about your Plan B, which seems rational: what if a lot of people have entry-level care work as their back-up? How will you stave off that competition? Or do you think it's a matter of avoiding loss aversion and getting out of your Plan A game early, without lingering (if some pre-stated KPI of yours goes above or below a certain threshold), to grab one of those positions?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-10-01T00:00:14.376Z · LW · GW

Now that AGI seems to arrive in the near term (2-5 years) and ASI is possibly 5-10 years away (i.e. a few thousand days), what do you personally think will help you stay relevant and in demand? What do you read/watch/study/practice? What skills are you focusing on sharpening? What "plan B" and "plan C" do you have? What field of work/study would you recommend others steer away from ASAP?

Asking for a friend...

Comment by Anders Lindström (anders-lindstroem) on What is "True Love"? · 2024-08-20T12:07:56.491Z · LW · GW

I think David Buss's book "When Men Behave Badly" is a good starting point for trying to understand the dynamics of heterosexual dating and mating.

Comment by Anders Lindström (anders-lindstroem) on Rabin's Paradox · 2024-08-15T10:08:36.787Z · LW · GW

Real life translations:

Expected value = That thing that never happens to me unless it is a bad outcome

Loss aversion = It's just the first week of this month and I have already lost 12 arguments, been ripped off twice, and gotten 0 likes on Tinder

A fair coin flip = Life is not fair

Utility function = Static noise

Comment by Anders Lindström (anders-lindstroem) on GPT-4o System Card · 2024-08-09T07:38:31.350Z · LW · GW

Thank you for refreshing my memory

Comment by Anders Lindström (anders-lindstroem) on GPT-4o System Card · 2024-08-08T21:49:55.367Z · LW · GW

Perhaps my memory fails me, but didn't Anthropic say that they would NOT be the ones pushing the envelope, playing it safe instead? From the METR report: "GPT-4o appeared more capable than Claude 3 Sonnet and GPT-4-Turbo, and slightly less than Claude 3.5 Sonnet."

Comment by Anders Lindström (anders-lindstroem) on AI Rights for Human Safety · 2024-08-03T11:05:52.112Z · LW · GW

Existing legal institutions are unprepared for the AGI world.

Every institution is unprepared for the AGI world. And judging from history, laws will always lag behind technological development. I do not think there is much a lawmaker can do other than be reactive to future tech; there are just too many "unknown unknowns" to be proactive. Sure, you can say "everything is forbidden", but that does not work in reality. I guess the paradox here is that we want laws to be stable over time but also easy to change on a whim.

Comment by Anders Lindström (anders-lindstroem) on Bryan Johnson and a search for healthy longevity · 2024-07-28T09:50:11.078Z · LW · GW

The question one should perhaps ask is why you would want to live "forever". Of course I understand the idea of having a healthy body; there is no fun in being sick and in pain. But since we have no idea where we came from, where we are, or where we are going, perhaps there is as much meaning in death as we think there is in life.

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-09T15:54:59.024Z · LW · GW

For starters, it could be used as a diplomatic tool with tremendous bargaining power, as well as a deterrent to anyone who wanted to challenge US post-war dominance in all fields.

Now imagine a machine that is better at solving any problem in all of science than all the smartest people and scientists in the world. Would not this machine give its owners EXTREME advantages in all things related to government/military/intelligence?!

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-09T15:33:36.619Z · LW · GW

Well, how many in Congress and the Senate had heard about the Manhattan Project?
"Keeping 120,000 people quiet would be impossible; therefore only a small privileged cadre of inner scientists and officials knew about the atomic bomb's development. In fact, Vice-President Truman had never heard of the Manhattan Project until he became President Truman."
https://www.ushistory.org/us/51f.asp

When it comes to the scientists, we have no idea if the work they do in "private" companies is part of a bigger government-led effort. Which would be the most efficient way, I suppose.

I don't really understand why some people seem to get so upset about the idea that the government/military is involved in developing cutting-edge technology. As if AI were something that governments/militaries are not allowed to touch? The military-industrial complex has been and will always be involved in these kinds of endeavors.

Comment by Anders Lindström (anders-lindstroem) on An AI Manhattan Project is Not Inevitable · 2024-07-07T10:45:47.522Z · LW · GW

To be clear, I am confident that governments and militaries will be extremely interested in AI.

It makes perfect sense that it will turn into a Manhattan project, and it probably (p>0.999999...) already has. The idea that the government, military, and intelligence agencies have not yet received the memo about AI/AGI/ASI is beyond naive.


Just as being the first to develop a nuclear bomb carried extreme advantages, being the first to achieve AGI might carry the same EXTREME advantages.

Comment by Anders Lindström (anders-lindstroem) on Finding the Wisdom to Build Safe AI · 2024-07-05T12:50:54.645Z · LW · GW

Are we really sure that we should model AIs in the image of humans? We apparently cannot align people with people, so why would a human replica be any different? If we train an AI to behave like a human, why do we expect the AI NOT to behave like a human? Like it or not, part of what makes us human is lying, stealing, and violence.

"Fifty-two people lost their lives to homicide globally every hour in 2021, says new report from UN Office on Drugs and Crime". https://unis.unvienna.org/unis/en/pressrels/2023/uniscp1165.html

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-28T16:02:52.520Z · LW · GW

I am starting to believe that military use of AI is perhaps the best and fastest way to figure out if large-scale AI alignment is possible at all. Since the military will actively seek to develop AIs that kill humans, they must also figure out how not to kill ALL humans. I hope the military will be open with their successes and failures about what works and what does not.

Comment by Anders Lindström (anders-lindstroem) on Childhood and Education Roundup #6: College Edition · 2024-06-26T23:28:00.427Z · LW · GW

The other was taught by a Harvard prof. He informed us TFs that an A is the default grade. A- would require justification.

Great post!

But can this be true? I don't care whether doing so is fair or not; I just wonder if Harvard would be so stupid as to destroy their own brand. If the people hiring Harvard students start to understand that the grades do not reflect the students' knowledge of a subject even a little bit, it can go south pretty fast.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T22:22:18.803Z · LW · GW

Thanks for pointing me to Zvi's work

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T22:20:16.667Z · LW · GW

Yes, I meant an LLM in the context of a user who fed in a query about his or her problem and got a novel solution back. It is always debatable what a "real" or "hard" problem is, but as a lower bound I mean something that people here at LW would raise an eyebrow or two at if an LLM solved it. Otherwise there is, as you mention, plenty of stuff that "custom" AI/machine-learning models have solved for a long time.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-23T11:03:44.987Z · LW · GW

I have yet to read a post here on LW where someone writes about a frontier model that solved a "real" problem the person had genuinely tried to solve for a long(-ish) time but failed at, and then the AI solved it for them: a research problem, a coding problem, a math problem, a medical problem, a personal problem, etc. Has anyone experienced this yet?

Comment by anders-lindstroem on [deleted post] 2024-06-22T11:11:29.080Z

What you say in your post is common sense. Unfortunately, there is no room for common-sense questions in the race for AGI/ASI. From what people in the industry say about their own future predictions, AGI/ASI seems potentially VERY dangerous. However, we cannot stop for a second to think/talk about it.

Where I think the biggest misalignment is right now is not AI models vs. humans. GPT, Claude, Gemini, et al. are all very well aligned with humans. What is NOT aligned at all with humans are the AI companies' plans for the future.

At least in Western countries, minute details of civil law or the tax code with a p(doom)<0.00000000000000000001 can be publicly debated for years before they are implemented. But with a technology that AI insiders have predicted to have a p(doom)>0.05, we (the people) should just accept the risk and be quiet, because we are too stupid to understand that all deaths are not equal...

Are there any alternatives to AGI/ASI for solving big problems? Here is a quiz:

Cure cancer?
A) AGI/ASI
B) Stopping eating ultra processed food

End world hunger?
A) AGI/ASI
B) Redistribution of food

Stop (human influenced) climate change?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon

Stop microplastic pollution of the oceans?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon

Comment by Anders Lindström (anders-lindstroem) on Ilya Sutskever created a new AGI startup · 2024-06-20T14:57:58.485Z · LW · GW

Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...

Comment by Anders Lindström (anders-lindstroem) on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T11:59:32.197Z · LW · GW

Don't worry, fusion power is just 10 years away...

This paper is perhaps old news for most of you interested in energy, but I find it to be a good conversation starter when it comes to what kind of energy system we should aim for. 

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-20T11:39:59.945Z · LW · GW

For the record, I do not mean to single out Altman. I am talking in general about leading figures in the AI space (i.e. Altman et al.), for whom Altman has become a convenient proxy since he is a very public figure.

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-18T22:07:19.375Z · LW · GW

The current median p(doom) among AI scientists seems to be 5-10%. How can it NOT be reckless to pursue, without extreme caution, something that the people with the most knowledge in the field believe is close to a round of Russian roulette for humankind?

Imagine for a second that I am a world-leading scientist dabbling at home with viruses that could potentially give people eternal life and health, but that I publicly state: "based on my current knowledge and expertise there is maybe a 10% risk that I accidentally wipe out all humans in the process, because I have no real idea how to control the virus". Would you then:

A) Call me a reckless idiot, send a SWAT team to put me behind bars, and destroy my lab and any other labs that might be dabbling with the same biotech.

B) Say "let the boy play".

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-18T13:15:50.361Z · LW · GW

Of course they are not idiots, but I am talking about the pressure to produce results fast without having doomers and skeptics holding them back. A 1-, 2-, or 3-year delay for one party could mean that they lose.

If it had been publicly known that the Los Alamos team was building a new bomb capable of destroying cities, and that they were not sure whether the first detonation could set off an uncontrollable chain reaction destroying the Earth, don't you think there would have been quite a lot of debate and a long delay in the Manhattan Project?

If the creation of AGI is one of the biggest events on Earth since the advent of life, and those who get it first can (will) be the all-powerful masters, why would that not entice people to take bigger risks than they otherwise would?

Comment by Anders Lindström (anders-lindstroem) on OpenAI #8: The Right to Warn · 2024-06-18T10:07:30.926Z · LW · GW

I commented on the post Ilya Sutskever and Jan Leike resign from OpenAI [updated] and received quite a lot of disagreement (-18). Since then, Aschenbrenner has posted his report and been vocal about his belief that it is imperative for the security of the US (and the rest of the Western world...) that the US beats China in the AGI race, i.e. that AI tech is about military capabilities.

So, do many of you still disagree with my old comment? If so, I am curious to know why you believe that what I wrote is so far-fetched?

The old comment:
"Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and  intelligence agencies to produce results to not let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots that do not understand what kind of fire they might be playing with, that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing Open AI's behavior."

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-12T19:30:48.311Z · LW · GW

Just wait until more countries that do not share Western values get their hands on tools like this. I think the only way social media can survive is mandatory ID. If Airbnb can do it, I am sure Meta, X, Snap, etc. can do it. And... call me old-fashioned, but I would rather not share ANY personal information with ANY intelligence service.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-12T19:24:18.850Z · LW · GW

Huawei claims they are catching up on Nvidia: https://www.huaweicentral.com/ascend-910b-ai-chip-outstrips-nvidia-a100-by-20-in-tests-huawei/

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-06-12T11:30:18.743Z · LW · GW

The heat is on! It seems that the export restrictions on Nvidia GPUs have had little to no effect on Chinese companies' ability to make frontier AI models. What will the next move from the US be now? https://kling-ai.com/

Comment by Anders Lindström (anders-lindstroem) on Just admit that you’ve zoned out · 2024-06-05T15:48:19.602Z · LW · GW

Maybe this post should have been named "Attention is all you need"... Jokes aside, I think we have to be reasonable about how much information a human can digest in one day. All the emails, memes, papers, YouTube videos, chats, blogs, news, etc. take a toll. So if someone zoned out on you, it is perhaps more a sign of genuine fatigue than genuine disinterest.

Comment by Anders Lindström (anders-lindstroem) on Ilya Sutskever and Jan Leike resign from OpenAI [updated] · 2024-05-18T23:45:49.786Z · LW · GW

Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results so as not to let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots who do not understand what kind of fire they might be playing with, or that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing OpenAI's behavior.

Comment by Anders Lindström (anders-lindstroem) on Which skincare products are evidence-based? · 2024-05-03T08:40:10.747Z · LW · GW

You mean in a positive or negative way? Harmful? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5615097/ , and/or useless? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1447210/ 

Comment by Anders Lindström (anders-lindstroem) on Thoughts on seed oil · 2024-05-02T15:05:11.391Z · LW · GW

Some tough love: the only reason a post about seed oil could garner so much interest in a forum dedicated to rational thinking is that many of you are addicted to unhealthy, heavily processed crap food and want to find a rationale to keep on eating it.

If this were the 50s, a post titled "How many dry martinis are optimal to drink before lunch" would probably have elicited the same type of speculative wishful thinking in the comment section as this post has. You all know what the answer to the dry martini question is today: "Zero. If you feel the need to drink alcohol on a daily basis, seek help."

The solution is very simple: stop eating things you are not supposed to eat, instead of hoping for the miracle that your Snickers bar will turn out to be a silver bullet for longevity. If you cannot stop eating things you are not supposed to eat, seek professional help to kick your addiction(s).

Comment by Anders Lindström (anders-lindstroem) on Anxiety vs. Depression · 2024-03-23T15:42:56.667Z · LW · GW

Glad to hear you are doing better!

Ok, that is an interesting route to go. Let "us" know how it goes, if you feel like sharing your journey.

Comment by Anders Lindström (anders-lindstroem) on Anxiety vs. Depression · 2024-03-17T13:18:13.790Z · LW · GW

Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep 'em coming!

Comment by Anders Lindström (anders-lindstroem) on Highlights from Lex Fridman’s interview of Yann LeCun · 2024-03-14T14:48:30.631Z · LW · GW

Good that you mentioned it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an "AI-washing" attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback whether he thinks they should ban football because of the risk of serious brain injuries; of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is "aligned" to their vision and will try to remove any and all obstacles when it comes to public opinion. Repeat after me:

"AI will not make humans redundant."

"AI is not an existential risk."

...

Comment by Anders Lindström (anders-lindstroem) on China-AI forecasts · 2024-02-26T12:56:56.880Z · LW · GW

I am not so sure that Xi would like to get to AGI any time soon, at least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the Chinese labor market could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think the export ban on advanced GPUs is a boon to him for the time being.

Comment by Anders Lindström (anders-lindstroem) on Why you, personally, should want a larger human population · 2024-02-24T12:44:13.488Z · LW · GW

The Paper Clip

Scene: The earth

Characters: A, an anti-humanist

B, a pro-humanist

A: "We need to reduce the population by 90-95% to not deplete all resources and destroy the ecosystem"

B: "We need a larger population so we get more smart people, more geniuses, more productive people"

(Enter ASI)

ASI: "Solved. What else can I help you with today?"

Comment by Anders Lindström (anders-lindstroem) on The One and a Half Gemini · 2024-02-22T14:31:39.395Z · LW · GW

Imagine having a context window that fits something like PubMed or even The Pile (but that's a bit into the future...). What would you be able to find in there that no one could see using traditional literature-review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that could handle a 100-1000 million token context window if they wanted to, or perhaps they already have one for internal research? It's "just" 10x+ of what they said they have experimented with, with no mention of any special-purpose-built tech.

Comment by Anders Lindström (anders-lindstroem) on When Should Copyright Get Shorter? · 2024-02-21T12:17:35.209Z · LW · GW

Dagon, thank you for following up on my comment.

Yes, they are in some ways apples and oranges, but both put a limit on your ability to create things. One can argue that immaterial rights have been beneficial for humanity as a whole, but at the same time they criminalize one of our most natural instincts, which is to mimic and copy what other humans do to increase our chances of survival. Which leads to the next question: would people stop innovating and creating if they could not protect their creations?