Posts

Apply to the 2025 PIBBSS Summer Research Fellowship 2024-12-24T10:25:12.882Z
Retrospective: PIBBSS Fellowship 2024 2024-12-20T15:55:24.194Z
Announcing the PIBBSS Symposium '24! 2024-09-03T11:19:47.568Z
[Closed] PIBBSS is hiring in a variety of roles (alignment research and incubation program) 2024-04-09T08:12:59.241Z
Retrospective: PIBBSS Fellowship 2023 2024-02-16T17:48:32.151Z
PIBBSS Speaker events comings up in February 2024-02-01T03:28:24.971Z
Apply to the 2024 PIBBSS Summer Research Fellowship 2024-01-12T04:06:58.328Z
AI Safety Hub Serbia Official Opening 2023-10-28T17:03:34.607Z
AI Safety Hub Serbia Soft Launch 2023-10-20T07:11:48.389Z
Announcing new round of "Key Phenomena in AI Risk" Reading Group 2023-10-20T07:11:09.360Z
Become a PIBBSS Research Affiliate 2023-10-10T07:41:02.037Z
PIBBSS Summer Symposium 2023 2023-09-02T17:22:44.576Z
EA/ACX/LW Belgrade June Meet-up 2023-05-24T10:58:00.000Z
Announcing the 2023 PIBBSS Summer Research Fellowship 2023-01-12T21:31:53.026Z
EA Serbia 3rd meet up 2022-11-28T18:58:00.000Z
EA/ACX/LW Belgrade November Meet-up 2022-11-02T01:24:00.000Z

Comments

Comment by DusanDNesic on “Charity” as a conflationary alliance term · 2024-12-15T22:39:55.720Z · LW · GW

Excellent article, and helpful in introducing vocabulary that lets me think thoughts I had been trying to pin down. Perhaps it should be cross-posted to the EA Forum?

Comment by DusanDNesic on Alexander Gietelink Oldenziel's Shortform · 2024-11-30T22:39:51.614Z · LW · GW

Future wars are about to look very silly.

Comment by DusanDNesic on Live Machinery: An Interface Design Philosophy for Wholesome AI Futures · 2024-11-05T08:40:47.145Z · LW · GW

I'm very sad I cannot attend at that time, but I am hyped about this and believe it to be valuable, so I am writing this endorsement as a signal to others. I've also recommended this to some of my friends, but alas, a UK visa is hard to get on such short notice. When you run it in Serbia, we'll have more folks from the eastern bloc represented ;)

Comment by DusanDNesic on Could randomly choosing people to serve as representatives lead to better government? · 2024-10-23T11:36:19.720Z · LW · GW

I think an important thing here is:

A random person gets selected for office. Maybe they need to move to the capital city, but their friends are still "back home." Once they serve their term, they will most likely want to return to their community. So lobbying needs to be able to pay enough to get you out of your community and break all your bonds during your short stint in power. Currently, politicians come to power slowly, and their social clique is used to being lobbied, getting rich, and selling out ideals.

This would cut down on corruption a lot (see also John Huang's comment https://www.lesswrong.com/posts/veebprDdTbq2Xmnyj/could-randomly-choosing-people-to-serve-as-representatives?commentId=NEtq8QtayXZY5a38J) and would undo a lot of the damage done from politicians not having to live normal lives under the current system.

Comment by DusanDNesic on Advice for journalists · 2024-10-13T22:45:11.824Z · LW · GW

Apologies, typo in the original - I do think it's not charity to not increase publicity; the post was missing a "not". Your response still clarified your position, but I do disagree - common courtesy is not the same as charity, and expecting it is not unreasonable. I feel like not publishing our private conversation (whether you're a journalist or not) falls under common courtesy or normal behaviour rather than "charity". Standing more than 1 centimeter away from you when talking is not charity just because standing closer is technically legal - it's the normal, polite thing to do, so when someone comes right up to my face when talking I have the right to be surprised and protest. Escalating publicity is like escalating intimacy in this example.

Comment by DusanDNesic on Advice for journalists · 2024-10-13T14:54:37.891Z · LW · GW

I feel like if everyone internalized "treat every conversation with people I don't know as if they may post it super publicly - and all of this is fair game", we would lose a lot of commons, and the quality of life and discourse would go down. I don't think it's "charity" to [EDIT: not] increase the level of publicity of a conversation, whether digital or in person. I think drawing a parallel with in-person conversation is especially enlightening - imagine we were having a conversation in a room with CCTV (you're aware it's recorded, but believe it to be private). Me taking that recording and playing it on local news is not just "uncharitable" - it's wrong in a way which degrades trust.

Comment by DusanDNesic on "25 Lessons from 25 Years of Marriage" by honorary rationalist Ferrett Steinmetz · 2024-10-04T14:41:50.110Z · LW · GW

Amazing recommendation which I very much enjoyed, thanks for sharing!

Comment by DusanDNesic on MATS Alumni Impact Analysis · 2024-10-02T05:59:53.856Z · LW · GW

Amazing write-up, thank you for the transparency and thorough work of documenting your impact.

Comment by DusanDNesic on Implications of China's recession on AGI development? · 2024-09-29T08:57:32.671Z · LW · GW

[Epistemic status: somewhat informed speculation] TLDR: I do not believe China was a major threat source, and the recession makes it slightly less likely that they will become one. Conventional wars are more likely to happen, and their effect on AI development is uncertain.

I generally do not think China is as big of a threat in the AGI race as some others (notably Aschenbrenner) think. For AGI to be developed first in China, several factors need to be true: China has more centralized compute available than other countries, open models are near the frontier but not over the AGI threshold, and China's attitude towards developing AGI shifts (possibly due to race dynamics). On compute they are currently not on track, frontier models lag, and the attitude is towards trying not to develop AGI - at least publicly, and it seems privately as well, as far as we can glimpse. While the Chinese public is more techno-optimistic than the US public, the CCP leans towards engineers rather than politicians, and its senior advisors on AI are AI-pilled.

The current recession in China stems from a complex mix of politics and economics, and politics is quite slow to budge. I don't want to get too deep into it, but the banking sector is stretched thin: many workers are unable to pay back mortgages on apartments that were never completed, because real-estate developers built too much and ended up holding the bag with many unsold apartments - most of them second apartments, so not necessities but "investments". This is causing a loop of bankruptcies which is hard to stop and has led to overall pessimism about the future. Lowering interest rates and making money available to banks has made loans available, but people are skeptical of taking them due to what they perceive as an uncertain future. The CCP is likely to work on things which make the future feel more certain - large infrastructure projects such as bridges and dams, as it has historically done - at least for some time. Nuclear power plants and hydroelectric dams definitely qualify, but enormous compute clusters (using which chips? overpriced smuggled ones?) likely will not.

That is not to say that, if the US seems to be racing towards AGI and reaping benefits from advanced AI, China will not put all the resources of a centralized government into catching up - and that can be quite a few resources, since they can commandeer private enterprise or property to do so. If the countries of the world play it sane, actually negotiate international limits, and meet China where it wants to be met (the CCP has many reasons not to want AGI), I do not expect China to be a direct existential threat.

The recession also makes China more likely to blame bad economic results on foreign influence, and perhaps more likely to stoke international conflicts directly. I personally would not want to live in a country bordering China in the next 10 years. How this will influence AGI is tough to predict - more resources spent on war means less on AI development, unless AI development is essential for a warfare edge, in which case we should expect a boom in AI development. The earlier a conflict happens, the less likely AI is to play a major role in warfare.

Comment by DusanDNesic on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-09-29T08:08:18.010Z · LW · GW

I agree with the spirit of what you are saying, but I want to register a desire for "long timelines" to mean ">50 years" or "after 2100". In public discourse, hearing Yann LeCun say something like "I have long timelines, by which I mean no crazy event in the next 5 years" - that's simply not what people outside the AI sphere think of as long timelines.

Comment by DusanDNesic on Why I funded PIBBSS · 2024-09-19T10:36:16.660Z · LW · GW

Hi! Thanks for the kind words and for sharing your thought process so clearly! I am also quite happy to see discussions on PIBBSS' mission and place in the alignment ecosystem, as we have been rethinking PIBBSS outbound comms since the introduction of the board and executive team.

Regarding the application selection process:

Currently (scroll down to see stages 1-4), it comes down to having a group of people who understand PIBBSS (in addition to the Board, this includes alumni, mentors, and people who have worked with PIBBSS extensively before) looking through CVs and letters of motivation, and later work trials in the form of research proposals and research consolidation. After that, we do interviews and mentor-matching and then make our final decision. This has so far worked for our scope (as we grew in popularity, we also raised our bar, so the number of people passing the first selection stage has stayed the same over the past two years). So it works, but if we were to scale the Fellowship (it's not obvious that we would like to), this system would need to become more robust.

For Affiliates, the selection process is different, focusing much more on a proven track record of excellent research; due to the very few positions we can offer, it is currently a combination of word-of-mouth recommendations and very limited public rounds. This connects with a project we started internally, "Horizon Scanning", which produces reports on different research agendas and identifies interesting researchers in the field who might make great Affiliates. The first report should be out in the next month, so we will see how this interacts and how useful the reports are to the community (and to the fields we hope to bridge with AI Safety). Again, as we scale, this will require rethinking.

Thank you again for the write-up and your support! Huge thanks also to all the commenters here; we really appreciate the thoughtful discussion!

Comment by DusanDNesic on Reformative Hypocrisy, and Paying Close Enough Attention to Selectively Reward It. · 2024-09-11T20:00:39.987Z · LW · GW

A "Short-term Honesty Sacrifice", "Hypocrisy Gambit", something like that?

Comment by DusanDNesic on What is "True Love"? · 2024-08-19T19:36:29.985Z · LW · GW

There's also something like "just the right amount of friction" which enables true love to happen without being sabotaged by existing factors. There are things which cause relationship-breaking kinds of issues, such as permanent long distance, disagreement on how many kids to have and when and how to raise them, how to earn and spend money, religion and morals, work/life balance, and physical attraction. Then there's the fun kind of friction, where you can grow from each other or enjoy your differences - things would be bland without these. There's also something "true" about the intent to grow together and to trust each other to change each other's values, so that you start converging over time and becoming more similar. Something like access to my core, which I intentionally share, trusting that the other person will use it for good. Yeah, many pointers to the underlying concept - good luck in the dating market.

Comment by DusanDNesic on Principled Satisficing To Avoid Goodhart · 2024-08-18T15:22:06.693Z · LW · GW

Thank you for the great write-up. It's the kind of thing I believe and act upon, but said in a much clearer way than I could manage, and that has enormous value to me. I especially appreciate the nuance about the downsides of the view - not too strong nor too weak, in my view. And I also love the point of "yeah, maybe it doesn't work for perfect agents with infinite compute in a vacuum, but maybe that's not what'll happen, and it works great for regular bounded agents such as myself (n=1), and maybe that's enough?" Anyhow, thank you for writing up what feels like an important piece of wisdom.

Comment by DusanDNesic on WTH is Cerebrolysin, actually? · 2024-08-13T07:18:42.449Z · LW · GW

I had no idea, thanks for sharing! My mother-in-law was a GP in a public hospital in Kamchatka, and she's strongly against homeopathy, so I assumed things there are like things here in Serbia (some private "doctors" deal in homeopathy, but no one else does). Your comment explains something I didn't understand: why in Russia I saw so much homeopathy sold in packaging very similar to regular medicine.

Comment by DusanDNesic on Decomposing Agency — capabilities without desires · 2024-07-17T13:59:25.786Z · LW · GW

To answer the things which Raymond did not: it is hard for me to say who has the agenda which you think has good chances of solving alignment. I'd encourage you to reach out to people who pass your bar, perhaps more frequently than you currently do, and establish those connections. Your constraints around audio and video do make it hard to participate in something like the PIBBSS Fellowship, but it's perhaps worth taking a shot at it or at other programs. See if people whose ideas you like are mentoring in some programs - getting to work with them in structured ways may be easier than otherwise.

Comment by DusanDNesic on DM Parenting · 2024-07-17T10:05:40.558Z · LW · GW

Love it! As a DM and parent (albeit of a 1-year-old), reading this really made me smile and think through all the things I have in the house that I can design games around :) Thank you for the write-up!

Comment by DusanDNesic on Decomposing Agency — capabilities without desires · 2024-07-17T07:46:11.515Z · LW · GW

This sounds a bit like davidad's agenda in ARIA, except you also limit the AI to only writing provable mathematical solutions to mathematical questions to begin with. In general, I would say that you need possibly better feedback loops than that, possibly by writing more on LW, or consulting with more people, or joining a fellowship or other programs.

Comment by DusanDNesic on On saying "Thank you" instead of "I'm Sorry" · 2024-07-10T18:59:03.335Z · LW · GW

To add to the anecdata: I've heard it advised (like Raemon below) and started using it occasionally. It has been good for me, although not transformative - possibly I come from a different baseline of how important the change is. I don't apologise constantly, but as I've learned, it used to be more than I should.

Comment by DusanDNesic on The Incredible Fentanyl-Detecting Machine · 2024-07-03T20:08:42.494Z · LW · GW

Hmm, but that trades off against not showing up as suspicious on an X-ray. So maybe a mix of approaches makes it quite expensive to smuggle drugs and thus limits supply, raises prices, and drops consumption.

Comment by DusanDNesic on Are most people deeply confused about "love", or am I missing a human universal? · 2024-05-25T22:05:37.530Z · LW · GW

If all that is lost could be defined, it would, by definition, not be lost once the definition is expanded that much.

There is this video: https://youtu.be/OfgVQKy0lIQ on why Asian parents don't say "I love you" to their kids; it analyzes how the same word has different meanings in different languages - and, I would add, to different people as well. So whatever you classify is always missing something in the gaps. It's the issue of legibilizing (in Seeing Like a State terms): in trying to define it, you restrict it to only those things.

A lot of the meaning of the word "love" is contained within me, within my emotions, within my messy mind thinking fuzzy thoughts. If I restricted it to only defined categories, I would be bound to lose something. Instead, I enjoy the fullness of it by keeping it ill-defined and exploring its multitudes.

Perhaps it's simply the case that the answer to the question in the title is "you are missing a human universal". If you tried to define humour - analyzed jokes, divided them into categories, and extracted the hormones triggered in response to the stimuli caused by a certain joke - I would say you did not (on a certain level) understand humour better than a child who made a good joke and enjoyed a good laugh.

A final example I heard brought up again recently is Mary's room, the knowledge argument - no amount of classification of blue, understanding of light-spectrum data, etc. replaces the experience of seeing blue. Likewise with love.

To bring it back to your original question about understanding it in order to communicate with others: this is found less in books and more in self-exploration through relationships with others. (I speak from the perspective of someone in a happy long-term romantic relationship with zero issues and the best communication I can imagine, none of which came from books on either of our sides.)

Comment by DusanDNesic on Are most people deeply confused about "love", or am I missing a human universal? · 2024-05-24T11:13:37.437Z · LW · GW

I'm not sure - in dissecting the frog, something is lost while knowledge is gained. If you do not see how analysis of things can sometimes (not always!) diminish them, then that may be the crux. I agree with Wbrom above - some things in human experience are irreducible, and sometimes trying to get to a more atomic level means you lose a lot in the process, in the gaps between the categories.

Comment by DusanDNesic on Deep Honesty · 2024-05-09T08:53:07.270Z · LW · GW

This sounds like a case of the Rule of Equal and Opposite Advice: https://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/ I'm sure more honesty would be harmful for some people, but the caveats here do make it clear when not to use it. I agree more with the questions Tsvi raises in the other thread than with "this is awful advice". I can imagine that you are a person for whom more honesty is bad, although if you followed the caveats above, it would imo be quite rare to do it wrong. I think the authors do a good job of outlining many cases where it goes wrong.

Comment by DusanDNesic on ACX Covid Origins Post convinced readers · 2024-05-01T20:21:42.181Z · LW · GW

Is a lot of the effect not just "people who read ACX trust Scott Alexander"? The survey selects for the most "passionate" readers - those willing to donate their free time to Scott for research with ~nothing in return. Him publicly stating on his platform "I am now much less certain of X" is likely to make that group of people less certain of X.

Comment by DusanDNesic on Believing In · 2024-02-11T09:50:16.856Z · LW · GW

Great post Anna, thanks for writing - it makes for good thinking.

It reminds me of The Use and Abuse of Witchdoctors for Life by Sam[]zdat, in the Uruk series (which I highly recommend). To summarize: our modern way of thinking denies us the benefits of being able to rally around ideas that would get us to better equilibria. By looking at the priest calling for time spent in devoted prayer with other community members and asking "What for?", we end up losing the benefits of community, quiet time, and meditation. While we are closer to truth (in the territory sense), we have lost something, and it takes conscious effort to realize it is missing and to replace it. It describes the community-level version of the local problem of a LessWronger not committing to a friendship because it is not "true" - in marginal cases, believing in it can make it true!

(I recommend reading the whole series, or at least the article above, but the example it gives is "Gri-gri." "In 2012, the recipe for gri-gri was revealed to an elder in a dream. If you ingest it and follow certain ritual commandments, then bullets cannot harm you." - before reading the article, think about how belief in elders helps with fighting neighboring well-armed villages)

Comment by DusanDNesic on Scale Was All We Needed, At First · 2023-12-31T14:34:39.947Z · LW · GW

I assume Jan 1st 2025 is the natural day for a sequel :D

Comment by DusanDNesic on Defense Against The Dark Arts: An Introduction · 2023-12-31T13:42:49.803Z · LW · GW

Finding reliable sources is 99% of the battle, and I have yet to find one which would for sure pass the "too good to check" situation: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Some people on this website provide that for some topics, the ACOUP blog does it for history, etc., but it's really rare; mostly you end up with "listen to Radio Liberty and Pravda and figure out the truth if you can."

On the style side, I agree with other commenters that you have selected an example where, even after all the reading, I am not at all convinced your criticism is correct under every possible frame. Picking something like a politician talking about the good they have done despite actually being corrupt - or something much narrower in focus and more black-and-white - would have left you much less surface to defend. Here, it took a lot of text, and I am unsure what techniques I have learned, since your criticisms themselves require more effort to check for validity. You explained that the sunk cost fallacy pushed you towards this example, but it's still not too late to add a different one, move this one into a Google doc as optional reading, and note your edit. People may read this in the future, and there's no reason not to ease the concept for them!

Comment by DusanDNesic on Defense Against The Dark Arts: An Introduction · 2023-12-29T16:19:30.881Z · LW · GW

On phone, don't know how to format block quotes but: My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with "My counterarguments were bullshit, did you catch it?".

This was exactly what I did, such a missed opportunity!!

I also agree with other things you said. To contribute a useful phrase, your response to BS - "...is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take how seriously, I track how much people are able to engage with other worldviews, and which worldviews hold up and don't require avoidance techniques in order to preserve the worldview." - sounds a bit like Epistemic Learned Helplessness by Scott: https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/ which I think is good when you are not in a live debate: saying "I dunno, maybe" and then later spending time thinking about and researching the argument to see if it is true, meanwhile not updating.

Comment by DusanDNesic on E.T. Jaynes Probability Theory: The logic of Science I · 2023-12-28T11:07:46.766Z · LW · GW

Thank you for this - this is not a book I would generally pick up in my limited reading time, but this has clarified a lot of terms and thinking around probabilities!

Comment by DusanDNesic on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T11:49:26.530Z · LW · GW

My experience is much like this (for context, I've spoken about AIS to the general public, online but mostly offline, to audiences ranging from students to politicians). The more poetic, but also effective and intuitive, way to put this (sacrificing some accuracy, though I think not too much) is: "we GROW AI". It puts AI in a category with genetic engineering and pharmaceuticals fairly neatly, and shows the difference between PowerPoint and ChatGPT in how they are made and why we don't know how they work. It is also more intuitive than "black box", which is a more technical term and not widely known.

Comment by DusanDNesic on Announcing new round of "Key Phenomena in AI Risk" Reading Group · 2023-11-08T14:10:56.424Z · LW · GW

Hello Gabriel! We plan to run this group roughly three times a year, so you should be able to apply for the next round around January/February, which would start in February/March (not confirmed, just estimates).

Comment by DusanDNesic on Alignment Implications of LLM Successes: a Debate in One Act · 2023-10-23T18:50:23.807Z · LW · GW

Other comments did a great job of thoughtful critique of content but I must say that I also highly enjoyed the style, along with the light touch of Russian character writing style.

Comment by DusanDNesic on PIBBSS Summer Symposium 2023 · 2023-09-28T19:05:29.062Z · LW · GW

Thanks Daniel! Most talks should be available soon (except the ones we do not have permission to post)

Comment by DusanDNesic on Barriers to Mechanistic Interpretability for AGI Safety · 2023-08-29T20:22:03.832Z · LW · GW

Even for humans - are my nails me? Once clipped, are they me? Is my phone me? I feel like my phone is more me than my hair, for example. Is my child me, are my memes me, is my country me, etc etc... There are many reasons why agent boundaries are problematic, and that problem continues in AI Safety research.

Comment by DusanDNesic on Discussion about AI Safety funding (FB transcript) · 2023-05-03T23:07:22.811Z · LW · GW

I agree, but AIS jobs are usually fairly remote-friendly (unlike many corporate jobs), and the culture is better than at most universities I've worked with, so there are many non-wage perks. The question is: can people in low cost-of-living places find such highly paid work? In Eastern Europe, usually no - there are other people willing and able to work for less, so all wages are low; cost of living correlates with wages in that sense too. So giving generous salaries to experts who live in, or are willing to relocate to, lower cost-of-living places is cost-effective, insofar as they are currently an underutilized group. I know there are people in EE who would make good researchers but are unaware of the problems, the salary landscape, and so on - which is something we're trying to fix (and global awareness of AI is helping a lot).

Comment by DusanDNesic on Discussion about AI Safety funding (FB transcript) · 2023-05-01T11:44:11.593Z · LW · GW

Perhaps not all of them are in the Bay Area/London? 150k per year can buy you three top professors from Eastern European universities working for you full time, and happy about it. Sure, other jobs pay more, but when unconstrained from living in an expensive city, these grants actually go quite far. (We're toying with the idea of opening research hubs outside the most expensive hubs in the world, for exactly that reason.)

Comment by DusanDNesic on Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship · 2023-01-25T14:42:03.069Z · LW · GW

For those interested, PIBBSS is happening again in 2023, see more details here in LessWrong format, or on our website, if you want to apply.

Comment by DusanDNesic on [deleted post] 2023-01-16T23:29:07.099Z

Hello Ishan! This is lovely work, thank you for doing it!

Quick question - we (EA Serbia) are translating AGISF (2023) into Serbian (and making it readable to speakers of many related languages). Do I have your permission to translate your summary, to be used as keynotes for the facilitators in the region, or students after completing the course? We would obviously give credit to you and would be linking to this post as the original. We would not need to start now (possibly mid-February or so), and we would wait for the 2023 version to be up to date with the course we are translating.

Thanks! 

(P.S. You may also want to answer whether you are happy for it to be translated into any language, as a blank cheque of approval to translators from other countries ;) )

Comment by DusanDNesic on EA Serbia 3rd meet up · 2022-12-02T21:29:07.706Z · LW · GW

I have not read it, but it seems useful to come with that knowledge! :)

Thanks! The topic arose from the discussion we had last time on biorisks; if you have topics you want to explore, bring them to the meeting to suggest for January!