Posts

Become a Superintelligence Yourself 2025-05-10T20:20:30.157Z
Yaroslav Granowski's Shortform 2025-05-09T20:43:37.647Z
Humans vs LLM, memes as theorems 2025-05-09T13:26:34.043Z
Healing powers of meditation or the role of attention in humoral regulation. 2025-05-08T06:48:19.068Z

Comments

Comment by Yaroslav Granowski (yaroslav-granowski) on skybluecat's Shortform · 2025-05-21T13:47:37.721Z · LW · GW

Of course. And this is what many good rehabilitation programs do.

But mere distraction is, again, only a temporary solution. Patients need to relearn healthy behavioral patterns; otherwise they may eventually relapse.

Games are good in the sense that they provide a quick feedback loop: you had a problem and quickly solved it without a drug.

Comment by Yaroslav Granowski (yaroslav-granowski) on skybluecat's Shortform · 2025-05-21T13:14:11.614Z · LW · GW

I have known several opioid-addicted people closely and was myself addicted to nicotine. Physical withdrawal symptoms are only a small part of the problem in both cases. Although I tend to agree with you on this part:

withdrawal doesn't create pain, but simply amplifies and turns attention to small pains and discomforts that are already there, but normal people just don't notice or get used to ignoring

You really can toughen up and endure the days to weeks of physical withdrawal, but then you have to deal with months to years of psychological addiction.

Opioid addiction is like a short circuit in motivation. Normally, when some problem bothers you, you are motivated to solve it. Opioids give an illusion of all problems disappearing, and teach people a flawed behavioral pattern: instead of solving the actual problem, just take a dose. This becomes a vicious cycle: the addicted person spends all their money on drugs, which produces more problems and a stronger urge to solve them by taking more drugs. The planning horizon shrinks to hours. Some prefer to steal money for a dose even knowing they will be caught the same day.

Comment by Yaroslav Granowski (yaroslav-granowski) on Yaroslav Granowski's Shortform · 2025-05-21T05:53:21.201Z · LW · GW

While the alignment community is frantically trying to convince themselves of the possibility of benevolent artificial superintelligence, research into human cognition remains undeservedly neglected.
Modern AI models are predominantly based on neural networks, the so-called connectionist approach in cognitive architecture studies. But in the beginning, the symbolic approach was more popular because of its lower computational demands. Logic programming was the means to imbue the system with the programmer's intelligence.
Although symbolist AI researchers studied the workings of the human brain, their research was driven by attempts to reproduce the brain and create an artificial personality, rather than to help programmers express their thoughts. User ergonomics was largely ignored. Logic programming languages aimed to be the closest representation of the programmer's thoughts, but they failed at being practically convenient. As a result, nobody uses vanilla logic programming for practical purposes.
In contrast, my research is driven by ergonomics and attempts to synchronize with the user's thinking. For example, while proving a theorem (creating an algorithm), instead of manually composing plain text in a sophisticated language, the user sees the current context and chooses the next step from the available options.

Comment by Yaroslav Granowski (yaroslav-granowski) on The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda · 2025-05-18T05:33:16.179Z · LW · GW

Here is another neglected approach:

There is a research startup in the ergonomics of logic programming that aims for augmented cognition while still relying on traditional computer interfaces.

More on this in my article: Become a Superintelligence Yourself

Comment by yaroslav-granowski on [deleted post] 2025-05-12T16:00:39.510Z

I got a timely answer from EA lead manager Caleb Parikh on the EA Forum.

Things are as they are supposed to be. Sorry for the misunderstanding.

Comment by Yaroslav Granowski (yaroslav-granowski) on Better Air Purifiers · 2025-05-12T08:14:23.133Z · LW · GW
  • The filter moves a lot of air, so you need quite long pipes.

Airflow speed matters only in requiring a bigger pipe diameter. Impedance depends on the ratio of length to diameter.

  • You need pipes on both the input and output

Indeed.

  • Some of the noise will be vibration of the purifier body, so you might need to enclose that too

That depends on the quality of the fan bearing. Air vibration does transmit to the body and to the pipes, but very little. And you can add an extra casing around it if you need to.

 

This is how I built my DIY purifier (the first version of it): a fan in the middle of a perforated 60mm steel pipe, with wet charcoal on gauze wound around the pipe. Inside the pipe was a layer of reticulated polyurethane foam acting as both a filter and a sound absorber. The whole thing was then put into a 150mm pipe with a layer of polyurethane foam.

And it was quite quiet. The problem was the quality of the charcoal, which gave off a smell. So I reassembled it for better throughput and now use it in the kitchen without muffling.

And for the room I use this ugly and loud, but powerful, setup for short periods when the air outside is cleaner:

Comment by Yaroslav Granowski (yaroslav-granowski) on Better Air Purifiers · 2025-05-11T23:51:19.899Z · LW · GW

Why not? If it has a good fan, most of the noise will be airborne, so you only need to attach mufflers to the inlet and outlet. But there may be difficulties if you don't want to disassemble it.

I built a DIY purifier for myself and used an extremely noisy, high-pitched 40mm server fan running at 18000 rpm. High frequencies are absorbed even better by the muffler. And due to their small size, these fans can produce high pressure at low throughput, so you can save on filter materials. I didn't finish it, though, since the activated charcoal I bought was of low quality; I use it in the kitchen only. If only I could insert pictures in a comment.

Comment by Yaroslav Granowski (yaroslav-granowski) on Better Air Purifiers · 2025-05-11T20:32:26.224Z · LW · GW

If you don't care much about compactness, you can easily make a DIY muffler: a long pipe with a sound-absorbing layer. The sound wave travels along the pipe; its outer part gets absorbed, and due to diffraction the inner part spreads outward. The attenuation is exponential in amplitude, i.e., in decibels it is proportional to the length of the pipe.
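That relationship can be sketched numerically. A minimal sketch, assuming an absorber-lined duct; `db_per_m` below is an illustrative coefficient, not a measured value:

```python
def muffler_attenuation_db(length_m: float, db_per_m: float = 15.0) -> float:
    # Attenuation in decibels grows linearly with duct length
    # (equivalently, amplitude decays exponentially along the pipe).
    # db_per_m is an assumed coefficient for the absorber lining.
    return db_per_m * length_m

def residual_amplitude(length_m: float, db_per_m: float = 15.0) -> float:
    # Fraction of the original sound pressure amplitude that remains.
    return 10 ** (-muffler_attenuation_db(length_m, db_per_m) / 20)

# Doubling the pipe length doubles the attenuation in dB:
# a 1 m lined pipe gives 15 dB, a 2 m pipe gives 30 dB
# (about 3% of the original pressure amplitude remaining).
```

So each extra meter of lined pipe buys the same number of extra decibels, which is why a long pipe is such an effective muffler.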

Comment by Yaroslav Granowski (yaroslav-granowski) on Better Air Purifiers · 2025-05-11T20:17:46.433Z · LW · GW

It feels pretty inefficient to me. The filters present too much resistance to the airflow in this case, and the air will rather swirl around the fan blades.

Why not attach filters right to the fan blades or even instead of them?
 

Comment by Yaroslav Granowski (yaroslav-granowski) on Programming Language Early Funding? · 2025-05-11T15:55:44.353Z · LW · GW

Also, academic research is funding languages like Haskell, Scala, Lean, etc.

I don't know about the others, but Lean was not funded by academia. It was funded by the private fund ConvergentResearch, which targets somewhere between academia and VC funding. I know this because Lean is the closest analogy to my project, although with a different mission. I tried to apply, but they didn't reply. Maybe they will find your proposal more appealing.

Anyway, the emergence of funds such as ConvergentResearch and OpenPhilanthropy is a promising trend.

Comment by Yaroslav Granowski (yaroslav-granowski) on About Q Home · 2025-05-11T13:11:20.931Z · LW · GW
Comment by Yaroslav Granowski (yaroslav-granowski) on Q Home's Shortform · 2025-05-11T12:46:28.303Z · LW · GW

Fellow Russian researcher here. I doubt small donations can work. Not many people use crypto (I'm just as much a noob and not sure it's worth learning until I have a big audience). I tried to register with OpenCollective so that they would collect money for me before I leave Russia, but they rejected me despite my having a project to showcase.

If you have something to show, you could try applying for research grants. This is what I'm trying to do. But that only makes sense if you have time and are not too desperate about the near future.

Comment by Yaroslav Granowski (yaroslav-granowski) on Viliam's Shortform · 2025-05-11T11:54:11.669Z · LW · GW

If those stories were about the pursuit of truth, like Archimedes's eureka in the bathtub, they could motivate students and teach some lessons in rational thinking.

And history itself could be a much more interesting subject if it taught some real wisdom rather than demanding rote memorization of dates and places.

Comment by Yaroslav Granowski (yaroslav-granowski) on A Proposal for Recognizing Nonstandard Intelligences · 2025-05-11T07:06:13.763Z · LW · GW

I do this all the time. But perhaps I should stay away from this conversation. Bad for the karma.

Comment by Yaroslav Granowski (yaroslav-granowski) on A Proposal for Recognizing Nonstandard Intelligences · 2025-05-11T06:03:19.531Z · LW · GW

Would it have made a difference if the post was about them instead of me? 

Not at all.

Additionally, if someone is truly struggling to communicate, or has been misunderstood their whole life, how would anyone with a biased mindset even hear their side of the story?

Why would anyone want to hear it? What's in it for them?

To me it looks like an appeal to pity and to the guilt of the privileged.

Comment by Yaroslav Granowski (yaroslav-granowski) on Yaroslav Granowski's Shortform · 2025-05-10T20:48:59.852Z · LW · GW

Done: Become a Superintelligence Yourself

Comment by Yaroslav Granowski (yaroslav-granowski) on Yaroslav Granowski's Shortform · 2025-05-09T21:09:51.615Z · LW · GW

Thank you, I should have clarified it better.

Maybe cyborgs are a closer analogy, but without physical implants: only advanced forms of software, like knowledge databases.

Comment by Yaroslav Granowski (yaroslav-granowski) on A Proposal for Recognizing Nonstandard Intelligences · 2025-05-09T21:06:35.641Z · LW · GW

I'm as much a newbie as you and can hardly pass for a gatekeeper; I'm struggling to get heard myself.

But we all have limited processing capacity and have to prioritize what to direct our attention to.

Comment by Yaroslav Granowski (yaroslav-granowski) on Yaroslav Granowski's Shortform · 2025-05-09T20:43:37.647Z · LW · GW

Is there anything on LessWrong about human-based superintelligence? I'm a newbie and about to write a lengthy post about it. But the idea seems pretty obvious and has likely been expressed somewhere before.

Comment by Yaroslav Granowski (yaroslav-granowski) on A Proposal for Recognizing Nonstandard Intelligences · 2025-05-09T14:17:06.669Z · LW · GW

Rationalism, if universal, must recognize minds that don’t conform in form but do in structure. As we prepare for alignment with nonhuman intelligences, distinguishing unfamiliar expression from flawed reasoning becomes critical, not optional.

Rationalism, first of all, must find efficient solutions to relevant problems.

Trying to understand someone out of pity is not a question of rationalism, but rather of humanitarian concern.

But if collaboration is worth the effort of trying to understand, then pity is not necessary. So, my advice: focus not on your personal issues, but on discussing what would be interesting to others.

Comment by Yaroslav Granowski (yaroslav-granowski) on Orienting Toward Wizard Power · 2025-05-08T16:48:46.731Z · LW · GW

Seek wizard power, not king power.

What about wiseman power?

If the king marches in front of the crowd wherever it pushes him, and the wizard is busy with random things like inventing toothbrushes, who is going to tell them all what to do?

I mean the importance of priority management. Spending a lot of time on getting a better toothbrush or better pants doesn't look rational.

Comment by Yaroslav Granowski (yaroslav-granowski) on Hard Questions Are Language Bugs · 2025-05-08T16:07:16.453Z · LW · GW

can think “X is Y” and hear somebody say “X is not Y” or “X doesn’t exist” and instead of arguing, I can remember that “both X and Y don’t exist” and internally hug whatever part of my brain has been scarred by the impression that X and Y are indeed things.

I think the key is in the definition of X.

For example, every natural number is either odd or even.

Try thinking about this from a statistician's point of view. When you observe some data, your brain recognizes certain patterns and correlations between them. Positive correlations are like "Every even number is also a natural number." How does the brain notice them? By noticing that whenever something matches one pattern, it also matches another.

But how should the brain notice negative correlations? Upon something matching one pattern, by trying to match it against all the others? Like, if "something is an odd number," checking whether it is also "a sort of wine," and so on? I don't think it works this way.

I think disjunctive syllogisms are more natural for the brain. It first notices that "some natural numbers are odd numbers" and "some natural numbers are even numbers." Given such rules, it is much cheaper to then notice that "every natural number is either odd or even, but never both."
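A toy illustration of that last step (the patterns and the check are illustrative, not a claim about any real cognitive model): once two positive patterns over the same domain have been noticed, confirming that they form an exclusive and exhaustive disjunction is a single cheap pairwise test, with no need to compare against every other known pattern.

```python
def is_odd(n: int) -> bool:
    return n % 2 == 1

def is_even(n: int) -> bool:
    return n % 2 == 0

def exclusive_exhaustive(p, q, sample) -> bool:
    # Every item matches exactly one of the two patterns:
    # p(n) != q(n) acts as XOR, true only when exactly one holds.
    return all(p(n) != q(n) for n in sample)

# "Every natural number is either odd or even, but never both":
assert exclusive_exhaustive(is_odd, is_even, range(1000))

# A pair of patterns that is neither exclusive nor exhaustive fails:
assert not exclusive_exhaustive(is_even, lambda n: n % 3 == 0, range(1000))
```

The check is linear in the sample and involves only the two already-noticed patterns, which is the cheapness the comment points at.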

Comment by Yaroslav Granowski (yaroslav-granowski) on Open Thread Spring 2025 · 2025-05-07T07:11:50.320Z · LW · GW

I wish I had discovered LessWrong earlier in my life. But perhaps I wouldn't have been able to appreciate it back then. This was always my curse: I couldn't see the wisdom in other people's words until I learned it the hard way.

I always believed intelligence to be the most advantageous trait, and if I failed at something, it was only because I was not smart enough, or not smart enough to avoid the strings attached.

Born in Russia and self-educated, I tried to emigrate with no preparation. Without a degree or much money, I could only get tourist visas and had no means to settle anywhere, traveling mainly through Southeast Asian countries.

However, 7 years abroad turned out to be the best thing I could have done for personal growth. I met many interesting people from all around the world and immersed myself in local cultures and religious practices. Buddhism helped improve my introspective skills, and Islam taught me the importance of priority management. Eventually, the immigration authorities grew tired of such a vagabond tourist, and I had to return to Russia with the hope of one day qualifying for a talent visa.

I tried my hand at writing but couldn't even get any feedback from readers. Still, my improved writing skills helped me win an InnoCentive challenge. The prize money allowed me to dedicate several years to an ambitious software project, and, with improved EI, I finally made something cool. Upon reaching a proof of concept, I emerged from coding intending to get funding and dive back in. But I faced the same problem of promoting my work.

As it didn’t go viral with the target group of software engineers, I tried to appeal to the vision behind the project, which is essentially an alternative to AI. Trying to promote it in AI-related subreddits, I figured that AI-alignment is a kind of related subject; and in r/ControlProblem, I stumbled upon links to LessWrong.

So, this is how I got here. But despite the primary goal mentioned above, I think maybe I'll make my first post about the highly speculative subject of the healing powers of meditation, in terms of the correlation between attention and humoral regulation. Basically, to rewrite my old article and see if I got the spirit of LessWrong right :)