Posts

Anders Lindström's Shortform 2024-06-12T11:30:18.621Z

Comments

Comment by Anders Lindström (anders-lindstroem) on AI-enabled coups: a small group could use AI to seize power · 2025-04-17T10:50:27.416Z · LW · GW

The main reason for developing AI in the first place is to make possible what the headline says:  "AI-enabled coups: a small group could use AI to seize power".

AI-enabled coups are a feature, not a bug.

Comment by Anders Lindström (anders-lindstroem) on How Gay is the Vatican? · 2025-04-07T20:25:52.616Z · LW · GW

How about the culture in Catholic countries where gay men are mistreated and where the culture "demands" that young men find a wife and get married? One way to opt out of marriage, and of condemnation for being gay, with your honor intact and without having to reveal your preferences, is to go into the priesthood.

Comment by Anders Lindström (anders-lindstroem) on AI 2027: What Superintelligence Looks Like · 2025-04-06T14:07:49.498Z · LW · GW

Yes, sometimes they are slow, other times they are fast. A private effort to build a nuke or go to the moon in the time frames they did would not have been possible. AFAIK everyone agrees with the assumption that Chinese AI development is government-directed, but for some very strange reason people like to think that US AI is directed by a group of quirky nerds who want to save the world and just happen to get their hands on a MASSIVE amount of compute (worth billions upon billions of dollars). Imagine when the government gets to hear what these nerds are up to in a couple of years...

IF there is any truth to how important the race to AGI/ASI is to win,

THEN governments are the key players in that race.

Comment by Anders Lindström (anders-lindstroem) on AI 2027: What Superintelligence Looks Like · 2025-04-04T12:43:41.311Z · LW · GW

News of the new models percolates slowly through the US government and beyond.

 

A well-fleshed-out scenario, but this kind of assumption is always a dealbreaker for me.

Why would the government not be aware of the development of the mightiest technology and weapon ever created if "we" are aware of it?

Could you please elaborate on why you chose the "stupid and uninformed government" scenario, instead of the more plausible one where the government knows exactly what is going on at every step of the process and is the driving force behind it?

Comment by Anders Lindström (anders-lindstroem) on The Rise of Hyperpalatability · 2025-04-03T11:51:57.015Z · LW · GW

For the majority of human history we lived in a production market for food. We searched for that which tasted well but there was never enough to fill the void. Only the truly elite could afford to import enough food to reach the point of excess.

 

Humanity: ~300,000 years. Agriculture: ~12,000 years. We have been hunter-gatherers for the vast majority of human history.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-04-01T13:23:59.724Z · LW · GW

Come for the game theory, stay for the slot machines...

Oh, April 1st.

Comment by Anders Lindström (anders-lindstroem) on Policy for LLM Writing on LessWrong · 2025-03-27T21:48:08.972Z · LW · GW

Yes, but as I wrote in my answer to habryka (see below), I am not talking about the present moment. I am concerned with the (near) future. With the breakneck speed at which AI is moving, it won't be long before it is hopeless to figure out whether something is AI-generated or not.

So my point, and rhetorical question, is this: AI is not going to go away. Everyone(!) will use it, all day, every day. So instead of trying to come up with arbitrary formulas for how much AI-generated content a post may or may not contain, how can we use AI to the absolute limit to increase the quality of posts and make LessWrong even better than it already is?!

Comment by Anders Lindström (anders-lindstroem) on Policy for LLM Writing on LessWrong · 2025-03-26T21:25:36.064Z · LW · GW

I know the extremely hard work that a lot of people put into writing their posts, and the moderators are doing a fantastic job of keeping the standards very high, all of which is much appreciated. Bravo!

But I assume that this policy change is forward-looking, and that is what I am talking about: the future. We are at the beginning of something truly spectacular that has already yielded results in certain domains that are nothing less than mind-blowing. Text generation is one of the fields that has seen extreme progress in just a few years. If this progress continues (which is reasonable to assume), text generation will very soon be as good as or better than the best human writers in pretty much any field.

How do you as moderators expect to keep up with this progress if you want to keep the forum "AI-free"? Is there anything more concrete than a mere policy change that could be done to nudge people into NOT posting AI-generated content? IMHO LessWrong is a competition in clever ideas and smartness, and I think a fair assumption is that if you can get help from AI to reach "Yudkowsky-level" smartness, you will use it no matter what. It's just like when athletes use PEDs to get an edge. Winning >> Policies

Comment by Anders Lindström (anders-lindstroem) on Policy for LLM Writing on LessWrong · 2025-03-26T19:47:17.245Z · LW · GW

I understand the motive behind the policy change, but it's unenforceable and carries no sanctions. In 12-24 months I guess it will be very difficult (impossible) to detect AI spamming. The floodgates are open, and you can only appeal to people's willingness to have a real human-to-human conversation. But perhaps those conversations are not as interesting as talking to an AI? Those who seek peer validation for their cleverness will use all available tools in doing so, no matter what policy there is.

 

Comment by Anders Lindström (anders-lindstroem) on Policy for LLM Writing on LessWrong · 2025-03-26T14:45:42.166Z · LW · GW

I unfortunately believe that such policy changes are futile. I agree that right now it's possible (not 100% by any means) to detect a sh*tpost, at least within a domain I know fairly well. But remember that we are just at the beginning of Q2 2025. Where are we with this in Q2 2026, or Q2 2027?

There is no other defense against the oncoming AI forum slaughter than that people find it more valuable to express their own true opinions and ideas than to copy-paste or let an agent talk for them.

No policy change is needed, a mindset change is.

Comment by Anders Lindström (anders-lindstroem) on Will Jesus Christ return in an election year? · 2025-03-25T13:29:11.063Z · LW · GW

Spot on!

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-03-19T20:00:50.564Z · LW · GW

Oh, I mean "required" as in to get a degree in a certain subject you need to write a thesis as your rite of passage. 

Yes, you are right. Adapt or die. AI can be a wonderful tool for learning, but as it is used right now, where everyone has to pretend that they don't use it, it is beyond silly. I guess there will be some kind of reckoning soon.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-03-19T11:21:43.227Z · LW · GW

With AI's rapid advancements in research and writing capabilities, in what year do you think thesis writing will cease to be required for most BS and MS students? (I.e., effectively being abandoned as a measure of academic proficiency)

Comment by Anders Lindström (anders-lindstroem) on How I've run major projects · 2025-03-18T13:12:30.805Z · LW · GW

By the time you have an AI that can monitor and figure out what you are actually doing (or trying to do) on your screen, you do not need the person. It ain't worth the hassle to install cameras that will be useless in 12 months' time...

Comment by Anders Lindström (anders-lindstroem) on I grade every NBA basketball game I watch based on enjoyability · 2025-03-13T11:29:35.928Z · LW · GW

Cool project, I really like the clean and minimalist design AND functionality! 

Two thoughts: 
5-level ratings. I don't really like 5-level rating systems, because it's so easy to be a "lazy" reviewer and go for a three. I prefer 4- or 6-level rating systems, where there is no "lazy" middle ground.

Preferred winner. Most of the time when I watch sports of any sort, I have a preferred winner. Perhaps adding that data point to each game would make it interesting to see, in the aggregate, how it affects the rating you give a game.

Comment by Anders Lindström (anders-lindstroem) on Self-fulfilling misalignment data might be poisoning our AI models · 2025-03-10T11:33:00.995Z · LW · GW

But how do we know that ANY data is safe for AI consumption? What if the scientific theories we feed the AI models contain fundamental flaws, such that when an AI runs off and does its own experiments in, say, physics or germline editing based on those theories, it triggers a global disaster?

I guess the best analogy for this dilemma is "The Chinese Farmer" (the old man who lost his horse): I think we simply do not know which data will be good or bad in the long run.

Comment by Anders Lindström (anders-lindstroem) on A Bear Case: My Predictions Regarding AI Progress · 2025-03-06T19:44:12.308Z · LW · GW

Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one has the answer to whether the LLM approach will lead to AGI or not. However, I think you've meaningfully addressed interesting and important details that are often overlooked in broad hype statements, which get repeated and thrown around like universal facts and evidence for "AGI within the next 3-5 years".

Comment by Anders Lindström (anders-lindstroem) on A Bear Case: My Predictions Regarding AI Progress · 2025-03-06T13:36:05.115Z · LW · GW

This might seem like a ton of annoying nitpicking.

 

You don't need to apologize for having a less optimistic view of current AI development. I've never heard anyone driving the hype train apologize for their opinions.

Comment by Anders Lindström (anders-lindstroem) on How to Make Superbabies · 2025-02-21T10:44:11.833Z · LW · GW

I know many of you dream of having an IQ of 300 to become the star researcher and avoid being replaced by AI next year. But have you ever considered whether nature has actually optimized humans for staring at equations on a screen? If most people don’t excel at this, does that really indicate a flaw that needs fixing?

Moreover, how do you know that a higher IQ would lead to a better life—for the individual or for society as a whole? Some of the highest-IQ individuals today are developing technologies that even they acknowledge carry Russian-roulette odds of wiping out humanity—yet they keep working on them. Should we really be striving for more high-IQ people, or is there something else we should prioritize?

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-01-27T13:56:54.124Z · LW · GW

I would like to ask for a favor—a favor for humanity. As the AI rivalry between the US and China has reached new heights in recent days, I urge all parties to prioritize alignment over advancement. Please. We, humanity, are counting on your good judgment.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-01-22T14:28:09.489Z · LW · GW

Perhaps.

https://www.politico.eu/article/us-elon-musk-troll-donald-trump-500b-ai-plan/

But Musk responded skeptically to an OpenAI press release that announced funding for the initiative, including an initial investment of $100 billion.

“They don’t actually have the money,” Musk jabbed.

In a follow-up post on his platform X, the social media mogul added, “SoftBank has well under $10B secured. I have that on good authority.”

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2025-01-21T23:16:47.040Z · LW · GW

I suppose the $500 billion AI infrastructure program just announced can lay to rest all speculation that AGI/ASI is NOT a government-directed project.

Comment by Anders Lindström (anders-lindstroem) on What’s the short timeline plan? · 2025-01-07T11:14:47.671Z · LW · GW

Communicate the plan with the general public: Morally speaking, I think companies should share their plans in quite a lot of detail with the public.

Yes, I think so too, but it will never happen. AGI/ASI is too valuable to be discussed publicly. I have never been given the opportunity to have a say in any other big corporate decision regarding the development of weapons, and for sure I will not have it this time either.

"They" will build the things "they" believe are necessary to protect "the American or Chinese way of life", and "they" will not ask you for permission or your opinion.

Comment by Anders Lindström (anders-lindstroem) on By default, capital will matter more than ever after AGI · 2025-01-02T09:58:15.587Z · LW · GW
  • Money will be able to buy results in the real world better than ever.
  • People's labour gives them less leverage than ever before.
  • Achieving outlier success through your labour in most or all areas is now impossible.
  • There was no transformative leveling of capital, either within or between countries.

If this is the "default" outcome, there WILL be blood. The rational thing to do in this case is to get a proper prepper bunker and see what's left when the dust has settled.

Comment by Anders Lindström (anders-lindstroem) on What Goes Without Saying · 2024-12-26T12:31:58.552Z · LW · GW

Excellent points. My experience is that people in general do not like to think that the things they are doing could be done in other ways, or not at all, because that means they would have to rethink their own role and purpose.

Comment by Anders Lindström (anders-lindstroem) on Anders Lindström's Shortform · 2024-12-12T13:44:30.387Z · LW · GW

When you predict (either personally or publicly) future dates of AI milestones, do you:

Assume some version of Moore's "law", i.e. exponential growth,

or

Assume some near-term computing gains, e.g. quantum computing, i.e. doubly exponential growth?
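As an illustration of how much the growth assumption matters, here is a small sketch; the target factor and doubling times are made-up placeholder numbers for illustration, not forecasts:

```python
import math

TARGET = 1e9  # grow effective compute by a factor of one billion

# Assumption 1: plain exponential growth, one doubling every 2 years
# (a Moore's-law-style assumption).
years_exponential = 2 * math.log2(TARGET)

# Assumption 2: doubly exponential growth, where each doubling is faster
# than the one before (here the k-th doubling takes 2 * 0.9**k years).
years_doubly, factor, k = 0.0, 1.0, 0
while factor < TARGET:
    years_doubly += 2 * 0.9 ** k
    factor *= 2
    k += 1

print(f"exponential:        {years_exponential:5.1f} years")
print(f"doubly exponential: {years_doubly:5.1f} years")
```

Under these toy numbers the same milestone arrives in roughly 60 years versus roughly 19, so which curve you assume dominates any date you predict.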

Comment by Anders Lindström (anders-lindstroem) on Subskills of "Listening to Wisdom" · 2024-12-09T10:02:20.929Z · LW · GW

I'm also excited because, while I think I have most of the individual subskills, I haven't personally been nearly as good at listening to wisdom as I'd like, and feel traction on trying harder.

 

Great post! I personally have a tendency to disregard wisdom because it feels "too easy": if I am given some advice and it works, I think it was just luck or correlation. Then I have to go and try "the other way (my way...)", get a punch in the face from the universe, and then be like, "ohhh, so that's why I should have stuck to the advice".

Now that I think about it, it might also be intellectual arrogance: thinking I am smarter than the advice or the person giving it.

But lately I have started to think a lot about why we assume that successful outcomes require overreaching and burnout. Why do we have to fight so hard for everything, and feel kind of guilty if it came to us without much effort? So maybe my failure to heed words of wisdom is based in a need to achieve (overdo, modify, add, reduce, optimize, etc.) rather than to just be.

Comment by Anders Lindström (anders-lindstroem) on A car journey with conservative evangelicals - Understanding some British political-religious beliefs · 2024-12-08T14:08:00.410Z · LW · GW

My experience says otherwise, but I might just have happened to stumble on some militant foodies.

Comment by Anders Lindström (anders-lindstroem) on A car journey with conservative evangelicals - Understanding some British political-religious beliefs · 2024-12-08T12:00:08.042Z · LW · GW

"Conservative evangelical Christians spend an unbelievable amount of time focused on God: Church services and small groups, teaching their kids, praying alone and with friends. When I was a Christian I prayed 10s of times a day, asking God for wisdom or to help the person I was talking to. If a zealous Christian of any stripe is comfortable around me they talk about God all the time."

Isn't this true for ALL true believers, regardless of conviction? I could easily replace 'conservative evangelical Christians and God' with 'foodies and food', 'teenage girls and influencers', 'rationalists and logic', or 'gym bros and grams of protein per kg/lb of body mass'. There seems to be something inherent in the will to preach to others out of goodwill: we want to share something we believe would benefit others. The road to hell isn't paved with good intentions for nothing...

Comment by anders-lindstroem on [deleted post] 2024-12-06T10:06:18.871Z

Being tense might also be a direct threat to your health in certain situations. I saw an interview on TV with an expert on hostage situations some 7-10 years ago and he claimed that the number one priority for a captive should be to somehow find a way to relax their body. He said if they are not able to do that, the chance is very high that they will develop PTSD.

Comment by Anders Lindström (anders-lindstroem) on Should there be just one western AGI project? · 2024-12-04T11:07:58.384Z · LW · GW

For national-security reasons, it would be strange to assume that there is no coordination among the US firms already. And... are we really sure that China is behind in the AGI race?

Comment by Anders Lindström (anders-lindstroem) on Estimates of GPU or equivalent resources of large AI players for 2024/5 · 2024-12-01T09:55:59.504Z · LW · GW

And one wonders how much of the bottleneck is TSMC (the Western "AI bloc" has really put a lot of its eggs in one basket...) and how much is customer preference for Nvidia chips. The chip wars of 2025 will be very interesting to follow. Thanks for a good chip summary!

Comment by Anders Lindström (anders-lindstroem) on Estimates of GPU or equivalent resources of large AI players for 2024/5 · 2024-11-30T22:34:04.054Z · LW · GW

What about AMD? I saw on the latest TOP500 supercomputer list that systems using both AMD CPUs and GPUs now hold places 1, 2, 5, 8, and 10 among the top 10 systems. Yes, workloads on these computers are a bit different from those of a pure GPU training cluster, but still.

https://top500.org/lists/top500/2024/11/

Comment by Anders Lindström (anders-lindstroem) on Dave Kasten's AGI-by-2027 vignette · 2024-11-28T12:49:42.990Z · LW · GW

Yes, the soon-to-be-here "human-level" AGI people talk about is for all intents and purposes ASI. Show me one person who is at the highest expert level in thousands of subjects, has the content of all human knowledge memorized, and can draw the most complex inferences on that knowledge across multiple domains in seconds.

Comment by Anders Lindström (anders-lindstroem) on Counting AGIs · 2024-11-27T22:32:42.463Z · LW · GW

It's interesting that you mention hallucination as a bug/artifact. I think hallucination is what we humans do all day, every day, when we are trying to solve a new problem: we think up a solution we really believe is correct, we try it, and more often than not we realize we had it all wrong, so we try again and again and again. I think AIs will never be free of this; it will just be part of their creative process, as it is in ours. It took Albert Einstein a decade or so to figure out relativity theory; I wonder how many times he "hallucinated" a solution that turned out to be wrong during those years. The important part is that he could self-correct, dive deeper and deeper into the problem, and finally solve it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your "remote worker" a day, or ten, to think through a really hard problem, not even the sky will be the limit...

Comment by Anders Lindström (anders-lindstroem) on Counting AGIs · 2024-11-26T21:51:14.057Z · LW · GW

Thanks for writing this post!

I don't know what the correct definition of AGI is, but to me it seems that AGI is ASI. Imagine an AI that is at super-expert level in most (>95%) subjects, has access to pretty much all human knowledge, is capable of digesting millions of tokens at a time, and can draw inferences and conclusions from that in seconds. "We" normally have a handful of real geniuses per generation. So now imagine a simulated person who is like Stephen Hawking in physics, Terence Tao in math, Rembrandt in painting, etc., all at the same time. Now imagine that you have "just" 40,000-100,000 of these simulated persons, able to communicate at the speed of light and to use all the knowledge in the world within milliseconds. I think it will be a very transformative experience for our society from the get-go.

Comment by Anders Lindström (anders-lindstroem) on A very strange probability paradox · 2024-11-24T14:28:40.639Z · LW · GW

The thing is that, if you roll a 6 and then a non-6, in an "A" sequence you're likely to just die due to rolling an odd number before you succeed in getting the double 6, and thus exclude the sequence from the surviving set; whereas in a "B" sequence there's a much higher chance you'll roll a 6 before dying, and thus include this longer "sequence of 3+ rolls" in the set.

 

Yes! This kind of kills the "paradox". It's approaching an apples-and-oranges comparison.

Surviving sequences with n=100 rolls (for illustrative purposes)

[6, 6]
[6, 6]
[2, 6, 6]
[6, 6]
[2, 6, 6]
[6, 6]
Estimate for A: 2.333
[6, 6]
[4, 4, 6, 2, 2, 6]
[6, 6]
[6, 2, 4, 4, 6]
[6, 4, 6]
[4, 4, 6, 4, 6]
[6, 6]
[6, 6]
Estimate for B: 3.375

If you rephrase:

A: The probability that you roll a fair die until you roll two 6s in a row, given that all rolls were even.

B: The probability that you roll a fair die until you roll two 6s (not necessarily in a row), given that all rolls were even.

This changes the code to:

A_estimate = num_A_sequences_without_odds / n

B_estimate = num_B_sequences_without_odds / n

And the result (n=100000) 

Estimate for A: 0.045
Estimate for B: 0.062

I guess this is what most people were thinking when reading the problem, i.e., that there is a bigger chance of getting two non-consecutive 6s. But by the wording (see above) of the "paradox", B gives more rolls on average among the surviving sequences, while on the other hand you have more surviving sequences, and hence a higher probability.
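To make the comparison concrete, here is a minimal Monte Carlo sketch of both readings (the function and variable names are mine, not the original post's). With enough trials, A's surviving sequences average about 2.73 rolls with survival probability about 1/22 ≈ 0.045, and B's about 3.0 rolls with survival probability 1/16 = 0.0625, matching the estimates above:

```python
import random

def simulate(consecutive, trials=200_000, seed=0):
    """Roll a die until two 6s appear (consecutive or not).
    A sequence 'survives' only if no odd number was rolled.
    Returns (mean rolls among survivors, survival probability)."""
    rng = random.Random(seed)
    total_rolls, survivors = 0, 0
    for _ in range(trials):
        rolls, sixes, prev_was_six = 0, 0, False
        while True:
            r = rng.randint(1, 6)
            rolls += 1
            if r % 2 == 1:                     # odd roll: discard the sequence
                break
            if r == 6:
                if consecutive and prev_was_six:
                    total_rolls += rolls; survivors += 1
                    break
                sixes += 1
                if not consecutive and sixes == 2:
                    total_rolls += rolls; survivors += 1
                    break
                prev_was_six = True
            else:                              # rolled a 2 or a 4
                prev_was_six = False
    return total_rolls / survivors, survivors / trials

mean_a, p_a = simulate(consecutive=True)    # A: two 6s in a row
mean_b, p_b = simulate(consecutive=False)   # B: two 6s, any positions
print(mean_a, p_a)   # roughly 2.73 and 0.045
print(mean_b, p_b)   # roughly 3.00 and 0.0625
```

So both readings are visible in one simulation: B's survivors are longer on average, but there are more of them.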

Comment by Anders Lindström (anders-lindstroem) on OpenAI's CBRN tests seem unclear · 2024-11-22T10:02:45.723Z · LW · GW

With the scaling in compute, it will not take long until small groups or even single individuals can train or fine-tune an open-source model to reach o1's level (and beyond). So I am wondering about the data. Does, for instance, o1's training set in these subjects contain data that is very hard to come by, or is it mostly publicly available data? If it is the former, the limiting factor is access to data, and it should be reasonably easy to contain the risks. If it is the latter... oh boy...

Comment by Anders Lindström (anders-lindstroem) on What are the good rationality films? · 2024-11-20T23:10:48.095Z · LW · GW

"The Prime Gig": explores the life of Pendleton "Penny" Wise, a charismatic but morally conflicted telemarketer, as he navigates the cutthroat world of high-stakes sales schemes. Torn between ambition, romance, and integrity, he must decide whether to pursue wealth at the cost of his principles.

https://m.imdb.com/video/embed/vi4227907353/

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:52:33.733Z · LW · GW

I know I am a parrot here, but they are playing two different games. One wants to find ONE partner and then stop. The other wants to find as many partners as possible. You cannot compare utility across different goals. Yes, the poly person will have higher expected utility, but it is NOT comparable to the utility that the mono person derives.

The wording should have been:
"A 10% chance of finding a monogamous partner, tried 10 times, yields 1 monogamous partner in expectation and 0.63 in expected utility."
Not:
"A 10% chance of finding a monogamous partner, tried 10 times, yields 0.63 monogamous partners in expectation."

And:
"A 10% chance of finding a polyamorous partner, tried 10 times, yields 1 polyamorous partner in expectation and 1 in expected utility."
Instead of:
"A 10% chance of finding a polyamorous partner, tried 10 times, yields 1.00 polyamorous partners in expectation."

So there was a mix-up between the expected number of successes and expected utility.
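A quick way to see the distinction is to compute both quantities from the binomial distribution. This is a minimal sketch; the utility functions are my own illustrative choices (mono utility caps at one partner, poly utility is linear in the number of partners), and the exact capped value is 1 − 0.9^10 ≈ 0.651, close to the 0.63 figure quoted above:

```python
from math import comb

p, n = 0.1, 10
# Binomial pmf: probability of exactly k successes in 10 independent 10% tries.
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

expected_successes = sum(k * q for k, q in enumerate(pmf))    # n*p = 1.0
mono_utility = sum(min(k, 1) * q for k, q in enumerate(pmf))  # = P(at least one)
poly_utility = sum(k * q for k, q in enumerate(pmf))          # linear in successes

print(round(expected_successes, 4))  # 1.0
print(round(mono_utility, 4))        # 0.6513
print(round(poly_utility, 4))        # 1.0
```

The expected number of successes is the same in both cases; only the utility attached to extra successes differs.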

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:09:11.065Z · LW · GW

Yes. But I think you have mixed up expected value and expected utility. Please show your calculations.

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T20:04:18.893Z · LW · GW

I do not understand your reasoning. Please show your calculations.

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T16:17:04.478Z · LW · GW

No, I think you are mixing up the probability of at least one success in ten trials (with a 10% chance per trial), which is ~0.65 = 65%, with the expected value, which is n = 1 in both cases. You have the same chance of finding 1 partner in each case, and you do the same number of trials. There is a 65% chance of at least 1 success in the 10 trials for each type of partner. The expected outcome in BOTH cases is 1, as in n = 1, not 1 as in 100%.

Probability of at least one success: ~65%
Probability of at least two successes: ~26%
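These figures can be checked directly against the binomial distribution; a minimal sketch (the helper function name is mine):

```python
from math import comb

p, n = 0.1, 10

def prob_at_least(m):
    """P(at least m successes in n trials with success probability p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(m, n + 1))

expected_value = n * p               # 1.0 in both cases
at_least_one = prob_at_least(1)      # equals 1 - 0.9**10
at_least_two = prob_at_least(2)

print(round(expected_value, 2))  # 1.0
print(round(at_least_one, 3))    # 0.651
print(round(at_least_two, 3))    # 0.264
```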

Comment by Anders Lindström (anders-lindstroem) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T13:45:34.134Z · LW · GW

Why would the expectation of finding a polyamorous partner be higher in the case you gave? The same chance per try and the same number of tries should yield the same expectation.

Comment by Anders Lindström (anders-lindstroem) on The Online Sports Gambling Experiment Has Failed · 2024-11-18T09:38:33.441Z · LW · GW

Good write-up! People Cannot Handle "fill in the blank" on smartphones. Sex, food, drugs, social status, betting, binge-watching, shopping, etc., in abundance and a click away, is something we cannot handle. If some of the biggest corporations in the world spend billions upon billions each year to grab our attention, they will win, and "you" will on average lose, unless you pull the cord (or turn off the wifi...) or have extreme willpower.

I am definitely not the one to throw the first stone, but is it not pretty embarrassing that most of us, who thought we were so smart and independent, are mere serfs, both intellectually and physically, to a little piece of electronics that has completely and utterly hijacked our brains and bodies?

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-13T08:08:24.357Z · LW · GW

Yeah, that comic summarizes it all!

As a side note, I wonder how many would get their PhD degrees if the requirement was to publish 3-4 papers (2 single-author and 1-2 co-authored) where the main result (where applicable) needed to have p<0.01? Perhaps the paper-publishing frenzy would slow down a little if the monograph came back into fashion?

Comment by Anders Lindström (anders-lindstroem) on The lying p value · 2024-11-12T20:43:59.195Z · LW · GW

Agreed. I have never understood why p=0.05 is a holy threshold. People (and journals) toss out research if they get p=0.06, but they think they are on their way to a Nobel prize with p=0.04. Madness.
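One way to see how thin the line is: in a two-sided z-test, p = 0.06 and p = 0.04 correspond to nearly the same test statistic. A minimal sketch using the standard normal quantile:

```python
from statistics import NormalDist

def z_for_p(p):
    """Two-sided |z| threshold corresponding to a given p-value."""
    return NormalDist().inv_cdf(1 - p / 2)

for p in (0.06, 0.05, 0.04, 0.01):
    print(f"p = {p:4.2f}  ->  |z| = {z_for_p(p):.3f}")
# p = 0.06 maps to |z| ≈ 1.88 and p = 0.04 to |z| ≈ 2.05: nearly the same
# evidence, yet one result is "tossed out" and the other "Nobel-worthy".
```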

Comment by Anders Lindström (anders-lindstroem) on evhub's Shortform · 2024-11-09T18:50:51.100Z · LW · GW

We live in an information society -> "you" are trying to build the ultimate dual-use information tool/thing/weapon -> the government requires your service. No news there. So why the need to whitewash this? What about this is actually bothering you?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T19:50:56.944Z · LW · GW

I understood why you asked; I am also interested, in general, in why people up- or downvote something. It could be really good information, and food for thought.

Yeah, who doesn't want capital-T Truth... But I have come to appreciate the subjective experience more and more. I like science and rational thinking, and they have gotten us pretty far, but who am I to question someone's experience? If someone met 'the creator' on an ayahuasca journey, or thinks that love is the essence of the universe, who am I to judge? When I see the statistics on the massive use of antidepressants, it is obvious to me that we can't use rational and logical thinking to think our way out of our feelings. What are rationality and logical thinking good for if, in the end, they can't make us feel good?

Comment by Anders Lindström (anders-lindstroem) on Does the "ancient wisdom" argument have any validity? If a particular teaching or tradition is old, to what extent does this make it more trustworthy? · 2024-11-07T09:22:18.128Z · LW · GW

When it comes to being up- or downvoted, it's a real gamble. The same mechanisms are at play here as on other social (media) platforms, i.e., there are certain individuals, trends, and thought patterns that are hailed and praised, and vice versa, without any real justification. But hey, that is what makes us human: these unexplainable things called feelings.

PS: perhaps a new hashtag on X would be appropriate: #stopthedownvoting