Yeah, that comic summarizes it all!
As a side note, I wonder how many would get their PhD degree if the requirement were to publish 3-4 papers (2 single-author and 1-2 co-authored) where the main result (where applicable) needed to have p<0.01. Perhaps the paper-publishing frenzy would slow down a little bit if the monograph came back into fashion?
Agree. I have never understood why p=0.05 is a holy threshold. People (and journals) toss out research if they get p=0.06 but think they are on their way to a Nobel prize with p=0.04. Madness.
We live in an information society -> "You" are trying to build the ultimate dual-use information tool/thing/weapon -> The government requires your services. No news there. So why the need to whitewash this? What about this is actually bothering you?
I understood why you asked; I am also interested, in general, in why people upvote or downvote something. It could be really good information and food for thought.
Yeah, who doesn't want capital-T Truth... But I have come to appreciate the subjective experience more and more. I like science and rational thinking, it has gotten us pretty far, but who am I to question someone's experience? If someone met 'the creator' on an ayahuasca journey or thinks that love is the essence of the universe, who am I to judge? When I see the statistics on the massive use of anti-depressants, it is obvious to me that we can't use rational and logical thinking to think our way out of our feelings. What are rationality and logical thinking good for if in the end they can't make us feel good?
When it comes to being up- or down-voted, it's a real gamble. The same mechanisms are of course at play here as on other social (media) platforms, i.e., there are certain individuals, trends, and thought patterns that are hailed and praised, and vice versa, without any real justification. But hey, that is what makes us human, these unexplainable things called feelings.
PS Perhaps a new hashtag on X would be appropriate: #stopthedownvoting
Some stuff just works, but for reasons unknown to the practitioner. Trial and error is a very powerful tool if used over many generations to "solve" a particular problem. But that does not mean anyone knows WHY it works.
I am so thrilled! Daylight saving time got me to experience (kind of) the Sleeping Beauty problem first-hand.
Last night, we in Sweden set our clocks back one hour, from 03.00 to 02.00, and went from "summer time" to the dreaded "winter time". It's dreaded because we know what follows with it: ice storms and polar bears in the streets...
Anyways, I woke up in the middle of the night and reached for my phone to check what time it was. It was 02.50. Then it struck me: am I experiencing the first 02.50 or the second 02.50 tonight? That is, have I first slept to 03.00, had the clock change back to 02.00 (which the phone does automatically), and then slept until the new 02.50, or am I at the first 02.50, so that in 10 minutes, at 03.00, the clock will switch back to 02.00?
It was a very dizzying thought. I could not for the life of me say which. There was nothing in the dark that could give me any indication whether I was experiencing the first or the second 02.50. Then, with my thoughts spinning, I slowly waited for the clock on my phone to turn 03.00. When it did, it did not go back to 02.00; I had experienced the second 02.50 that night.
Maybe I was a bit vague. I was trying to say that waking up SB's twin sister on Monday was a way of saying that SB would be equally aware of it as if she herself had been awakened on Monday under the conditions stipulated in the original experiment, i.e. with zero recollection of the event. Or the other way around: SB is awakened on Monday but her twin sister on Tuesday. SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time, no matter if it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the "first" time and the "last" time. If she is awake, it is an equal chance, as far as she knows, that it is day 1 on the heads path as that it is day 56670395873966 (or any other day) on the tails path.
Or like this. Imagine that I flip a coin that I can see but you cannot. I give you the rule that if it is heads, I show you a picture of a dog. If it is tails, I show you the same picture of a dog, but I might have shown this picture to thousands of people before you and might show it to thousands of people after you, which you have no information about. You might be the first one to see it, but you might also be the last one to see it, or somewhere in the middle, i.e. you are not aware of the other observers. When I show you the picture of the dog, what chance do you give that the coin flip was heads?
But I am curious to know how a person with the thirder position argues in the case where she is awakened 999 or 8490584095805 times on the tails path: what probability should SB give heads in that case?
If the experiment were instead constructed such that:
- If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
- If the coin comes up tails, Sleeping Beauty's twin sister will be awakened and interviewed on Monday and Sleeping Beauty will be awakened and interviewed on Tuesday.
In this case it is "obvious" that the halfer position is the right choice. So why would it be any different if Sleeping Beauty, in the case of tails, is awakened on Monday too, since in this experiment she has zero recollection of that event? It does not matter how many other people they have woken up before the day she is woken up; she has NO new information that could update her beliefs.
Or say that the experiment were instead constructed so that, for tails, she would be woken up and interviewed 999999 days in a row; would she then say, upon being woken up, that the probability that the coin landed heads is 1/1000000?
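For concreteness, here is a minimal Monte Carlo sketch of how the two camps seem to count in this many-awakenings variant (the simulate function and its n_tails_awakenings parameter are just made up for illustration, not from the original experiment): the halfer looks at the fraction of experiments in which the coin landed heads, which stays around 1/2, while the thirder looks at the fraction of awakenings that lie on the heads path, which shrinks toward 1/(N+1), e.g. about 1/1000 for 999 tails awakenings.

```python
import random


def simulate(n_experiments=100_000, n_tails_awakenings=999):
    """Count the two quantities the halfer and the thirder care about.

    Heads: SB is awakened once.
    Tails: SB is awakened n_tails_awakenings times, with no memory of
    earlier awakenings, so every awakening looks identical to her.
    """
    heads_experiments = 0   # experiments in which the coin landed heads
    heads_awakenings = 0    # awakenings that happen on the heads path
    total_awakenings = 0    # all awakenings across all experiments

    for _ in range(n_experiments):
        heads = random.random() < 0.5  # fair coin
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += n_tails_awakenings

    per_experiment = heads_experiments / n_experiments   # halfer-style count, ~1/2
    per_awakening = heads_awakenings / total_awakenings  # thirder-style count, ~1/(N+1)
    return per_experiment, per_awakening


if __name__ == "__main__":
    per_exp, per_awake = simulate(n_experiments=100_000, n_tails_awakenings=999)
    print(f"fraction of experiments that were heads:  {per_exp:.3f}")    # about 0.500
    print(f"fraction of awakenings on the heads path: {per_awake:.4f}")  # about 0.001
```

Which of those two frequencies is the "right" one to call SB's credence is of course exactly the point of disagreement.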
I think it is just the cumulative effect of people seeing yet another prominent AI scientist "admit" that no one has any clear solution to the possible problem of a runaway ASI. Given that the median p(doom) is about 5-10% among AI scientists, people are of course wondering wtf is going on: why are they pursuing a technology with such high risk for humanity if they really think it is that dangerous?
Congratulations to Geoffrey Hinton and John Hopfield!
I wonder if Roko's basilisk will spare the Nobel prize committee now: https://www.nobelprize.org/prizes/physics/2024/press-release/
That is a very interesting perspective and mindset! In that scenario, do you think you will focus on value created by solving technical problems, or do you think you will focus on "softer" problems that are more centered on human wellbeing?
Thanks for your input. I really like that you pointed out that AI is just one of many things that could go wrong; perhaps people like me and others are so caught up in the p(doom) buzz that we don't see all the other stuff.
But I wonder one thing about your Plan B, which seems rational: what if a lot of people have entry-level care work as their back-up? How will you stave off that competition? Or do you think it's a matter of avoiding loss aversion and getting out of your Plan A game early, not lingering (if some pre-stated KPI of yours goes above or below a certain threshold), in order to grab one of those positions?
Now that AGI seems set to arrive in the near term (2-5 years) and ASI is possibly 5-10 years away (i.e. a few thousand days), what do you personally think will help you stay relevant and in demand? What do you read/watch/study/practice? What skills are you focusing on sharpening? What "plan B" and "plan C" do you have? What field of work/study would you recommend others steer away from ASAP?
Asking for a friend...
I think David Buss's book "When Men Behave Badly" is a good starting point for trying to understand the dynamics of heterosexual dating and mating.
Real life translations:
Expected value = That thing that never happens to me unless it is a bad outcome
Loss aversion = It's just the first week of this month and I have already lost 12 arguments, been ripped off twice, and gotten 0 likes on Tinder
A fair coin flip = Life is not fair
Utility function = Static noise
Thank you for refreshing my memory
Perhaps my memory fails me, but didn't Anthropic say that they would NOT be the ones pushing the envelope, but would play it safe instead? From the METR report: "GPT-4o appeared more capable than Claude 3 Sonnet and GPT-4-Turbo, and slightly less than Claude 3.5 Sonnet."
Existing legal institutions are unprepared for the AGI world.
Every institution is unprepared for the AGI world. And judging from history, laws will always lag behind technological development. I do not think there is much more a lawmaker can do than be reactive to future tech; I think there are just too many "unknown unknowns" to be proactive. Sure, you can say "everything is forbidden", but that does not work in reality. I guess the paradox here is that we want the laws to be stable over time but we also want them to be easy to change on a whim.
The question one should perhaps ask is why you would like to live "forever". Of course I understand the idea of having a healthy body; there is no fun in being sick and in pain. But since we have no idea where we came from, where we are, and where we are going, perhaps there is as much meaning in death as we think there is in life.
For starters, it could be used as a diplomatic tool with tremendous bargaining power, as well as a deterrent to anyone who wanted to challenge US post-war dominance in all fields.
Now imagine a machine that is better at solving any problem in all of science than all the smartest people and scientists in the world. Would not this machine give its owners EXTREME advantages in all things related to government/military/intelligence?!
Well, how many in Congress and the Senate had heard about the Manhattan Project?
"Keeping 120,000 people quiet would be impossible; therefore only a small privileged cadre of inner scientists and officials knew about the atomic bomb's development. In fact, Vice-President Truman had never heard of the Manhattan Project until he became President Truman."
https://www.ushistory.org/us/51f.asp
When it comes to the scientists, we have no idea if the work they do in "private" companies is part of a bigger government-led effort, which would be the most efficient way, I suppose.
I don't really understand why some people seem to get so upset about the idea that the government/military is involved in developing cutting-edge technology. As if AI were something that governments/militaries are not allowed to touch? The military-industrial complex has been and will always be involved in these kinds of endeavors.
To be clear, I am confident that governments and militaries will be extremely interested in AI.
It makes perfect sense that it will turn into a Manhattan project, and it probably (p>0.999999...) already has. The idea that the government, military, and intelligence agencies have not yet received the memo about AI/AGI/ASI is beyond naive.
Just as being the first to develop a nuclear bomb carried extreme advantages, being the first to achieve AGI might carry the same EXTREME advantages.
Are we really sure that we should model AIs in the image of humans? We apparently cannot align people with people, so why would a human replica be that different? If we train an AI to behave like a human, why do we expect the AI NOT to behave like a human? Like it or not, part of what makes us human is lying, stealing, and violence.
"Fifty-two people lost their lives to homicide globally every hour in 2021, says new report from UN Office on Drugs and Crime". https://unis.unvienna.org/unis/en/pressrels/2023/uniscp1165.html
I am starting to believe that military use of AI is perhaps the best and fastest way to figure out whether large-scale AI alignment is possible at all. Since the military will actively seek to develop AIs that kill humans, they must also figure out how not to kill ALL humans. I hope the military will be open about their successes and failures, about what works and what does not work.
The other was taught by a Harvard prof. He informed us TFs that an A is the default grade. A- would require justification.
Great post!
But can this be true? I don't care whether it is fair or not to do so. I just wonder if Harvard would be so stupid as to destroy their own brand. If people who are hiring Harvard students start to understand that the grades do not reflect the students' knowledge of a subject even a little bit, it can go south pretty fast.
Thanks for pointing me to Zvi's work
Yes, I meant an LLM in the context of a user who fed in a query about his or her problem and got a novel solution back. It is always debatable what a "real" or "hard" problem is, but as a lower bound I mean something that people here at LW would raise an eyebrow or two at if an LLM solved it. Otherwise there is, as you mention, plenty of stuff that "custom" AI/machine-learning models have been solving for a long time.
I have yet to read a post here on LW where someone writes about a frontier model that has solved a "real" problem, where the person really had tried for a long(-ish) time to solve it but failed and then the AI solved it for them: a research problem, a coding problem, a math problem, a medical problem, a personal problem, etc. Has anyone experienced this yet?
What you say in your post is common sense. Unfortunately, there is no room for common-sense questions in the race for AGI/ASI. From what people in the industry say about their own future predictions, AGI/ASI seems potentially VERY dangerous. However, apparently we cannot stop for even a second to think/talk about it.
Where I think the biggest misalignment is right now is not AI models vs. humans. GPT, Claude, Gemini, et al. are all very well aligned with humans. What is NOT aligned at all with humans are the AI companies' plans for the future.
At least in western countries, minute details of civil law or the tax code with a p(doom)<0.00000000000000000001 can be publicly debated for years before they are implemented. But with a technology that AI insiders have predicted to have a p(doom)>0.05, we (the people) should just accept the risk and be quiet, because we are too stupid to understand that all deaths are not equal...
Are there any alternatives to AGI/ASI for solving big problems? Here is a quiz:
Cure cancer?
A) AGI/ASI
B) Stopping eating ultra processed food
End world hunger?
A) AGI/ASI
B) Redistribution of food
Stop (human influenced) climate change?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon
Stop microplastic pollution of the oceans?
A) AGI/ASI
B) Stop buying every little thing you see on Alibaba and Amazon
Come on now, there is nothing to worry about here. They are just going to "move fast and break things"...
Don't worry, fusion power is just 10 years away...
This paper is perhaps old news for most of you interested in energy, but I find it to be a good conversation starter when it comes to what kind of energy system we should aim for.
For the record, I do not mean to single out Altman. I am talking in general about leading figures in the AI space (i.e. Altman et al.), for whom Altman has become a convenient proxy since he is a very public figure.
The current median p(doom) among AI scientists seems to be 5-10%. How can it NOT be reckless to pursue, without extreme caution, something that the people with the most knowledge in the field believe to be close to a round of Russian roulette for humankind?
Imagine for a second that I am a world-leading scientist dabbling at home with viruses that could potentially give people eternal life and health, but that I publicly state that "based on my current knowledge and expertise there is maybe a 10% risk that I accidentally wipe out all humans in the process, because I have no real idea how to control the virus". Would you then:
A) Call me a reckless idiot, send a SWAT team to put me behind bars, and destroy my lab and other labs that might be dabbling with the same biotech.
B) Say "let the boy play".
Of course they are not idiots, but I am talking about the pressure to produce results fast without having doomers and skeptics holding them back. A 1-, 2-, or 3-year delay for one party could mean that they lose.
If it had been publicly known that the Los Alamos team was building a new bomb capable of destroying cities, and that they were not sure whether the first detonation could lead to an uncontrollable chain reaction destroying the earth, don't you think there would have been quite a lot of debate and a long delay in the Manhattan Project?
If the creation of AGI is one of the biggest events on earth since the advent of life, and those who get it first can (will) be the all-powerful masters, why would that not entice people to take bigger risks than they otherwise would?
I commented on the post Ilya Sutskever and Jan Leike resign from OpenAI [updated] and received quite a lot of disagreement (-18). Since then, Aschenbrenner has posted his report and been vocal about his belief that it is imperative for the security of the US (and the rest of the western world...) that the US beats China in the AGI race, i.e. that AI tech is about military capabilities.
So, do many of you still disagree with my old comment? If so, I am curious to know why you believe that what I wrote in the old comment is so far-fetched.
The old comment:
"Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results to not let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots that do not understand what kind of fire they might be playing with, that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing Open AI's behavior."
Just wait until more countries that do not share western values get their hands on tools like this. I think that the only way social media can survive is mandatory ID. If Airbnb can do it, I am sure Meta, X, Snap, etc. can do it. And... call me old-fashioned, but I would rather not share ANY personal information with ANY intelligence service.
Huawei claims they are catching up with Nvidia: https://www.huaweicentral.com/ascend-910b-ai-chip-outstrips-nvidia-a100-by-20-in-tests-huawei/
The heat is on! It seems that the export restrictions on Nvidia GPUs have had little to no effect on Chinese companies' ability to make frontier AI models. What will the US's next move be now? https://kling-ai.com/
Maybe this post should have been named "Attention is all you need"... Jokes aside, I think we have to be reasonable about how much information a human can digest in one day. All the emails, memes, papers, YouTube videos, chats, blogs, news, etc. take a toll. So if someone zoned out on you, perhaps it is more a sign of genuine fatigue than of genuine disinterest.
Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results to not let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots that do not understand what kind of fire they might be playing with, that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing Open AI's behavior.
You mean in a positive or negative way? Harmful? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5615097/ , and/or useless? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1447210/
Some tough love: The only reason a post about seed oil could garner so much interest in a forum dedicated to rational thinking is that many of you are addicted to unhealthy, heavily processed crap food that you want to find a rationale to keep on eating.
If this were the 50's, a post titled "How many dry martinis are optimal to drink before lunch" would probably have elicited the same type of speculative wishful thinking in the comment section as this post. You all know what the answer to the dry martini question is today; it's: "Zero. If you feel the need to drink alcohol on a daily basis, seek help."
The solution is very simple. Stop eating things you are not supposed to eat instead of hoping for the miracle that your Snickers bar will turn out to be a silver bullet for longevity. If you cannot stop eating things you are not supposed to eat, seek professional help to kick your addiction(s).
Glad to hear you are doing better!
Ok, that is an interesting route to go. Let "us" know how it goes if you feel like sharing your journey.
Hey Sable, I am sorry about your situation. Perhaps I am pointing out the obvious, but you just achieved something. You wrote a post and people are reading it. Keep 'em coming!
Good that you mention it and did NOT get downvoted. Yet. I have noticed that we are in the midst of an "AI-washing" attack, which is going on here on LessWrong too. But it's like asking a star NFL quarterback if he thinks they should ban football because of the risk of serious brain injuries; of course he will answer no. The big tech companies pour trillions of dollars into AI, so of course they make sure that everyone is "aligned" to their vision, and they will try to remove any and all obstacles when it comes to public opinion. Repeat after me:
"AI will not make humans redundant."
"AI is not an existential risk."
...
I am not so sure that Xi would like to get to AGI any time soon. At least not something that could be used outside of a top-secret military research facility. Sudden disruptions in the labor market in China could quickly spell the end of his rule. Xi's rule is based on the promise of stability and increased prosperity, so I think the export ban on advanced GPUs is a boon to him for the time being.
The Paper Clip
Scene: The earth
Characters: A, an anti-humanist
B, a pro-humanist
A: "We need to reduce the population by 90-95% to not deplete all resources and destroy the ecosystem"
B: "We need a larger population so we get more smart people, more geniuses, more productive people"
(Enter ASI)
ASI: "Solved. What else can I help you with today?"
Imagine having a context window that fits something like PubMed or even The Pile (but that's a bit into the future...). What would you be able to find in there that no one could see using traditional literature-review methods? I guess that today a company like Google could scale up this tech and build a special-purpose supercomputer that could handle a 100-1000 million token context window if they wanted to, or perhaps they already have one for internal research? It's "just" 10x+ of what they said they have experimented with, with no mention of any special-purpose-built tech.
Dagon, thank you for following up on my comment.
Yes, they are in some ways apples and oranges, but both of them put a limit on your ability to create things. One can argue that intellectual property rights have been beneficial for humanity as a whole, but at the same time they criminalize one of our most natural instincts, which is to mimic and copy what other humans do to increase our chance of survival. Which leads to the next question: would people stop innovating and creating if they could not protect it?