Posts

PIBBSS is hiring in a variety of roles (alignment research and incubation program) 2024-04-09T08:12:59.241Z
Retrospective: PIBBSS Fellowship 2023 2024-02-16T17:48:32.151Z
PIBBSS Speaker events coming up in February 2024-02-01T03:28:24.971Z
Apply to the PIBBSS Summer Research Fellowship 2024-01-12T04:06:58.328Z
AI Safety Hub Serbia Official Opening 2023-10-28T17:03:34.607Z
AI Safety Hub Serbia Soft Launch 2023-10-20T07:11:48.389Z
Announcing new round of "Key Phenomena in AI Risk" Reading Group 2023-10-20T07:11:09.360Z
Become a PIBBSS Research Affiliate 2023-10-10T07:41:02.037Z
PIBBSS Summer Symposium 2023 2023-09-02T17:22:44.576Z
EA/ACX/LW Belgrade June Meet-up 2023-05-24T10:58:00.000Z
Announcing the 2023 PIBBSS Summer Research Fellowship 2023-01-12T21:31:53.026Z
EA Serbia 3rd meet up 2022-11-28T18:58:00.000Z
EA/ACX/LW Belgrade November Meet-up 2022-11-02T01:24:00.000Z

Comments

Comment by DusanDNesic on Believing In · 2024-02-11T09:50:16.856Z · LW · GW

Great post, Anna - thanks for writing it; it makes for good thinking.

It reminds me of The Use and Abuse of Witchdoctors for Life by Sam[]zdat, in the Uruk series (which I highly recommend). To summarize: our modern way of thinking denies us the benefits of rallying around ideas that would get us to better equilibria. By looking at the priest calling for devoted prayer with other community members and asking, "What for?", we end up losing the benefits of community, quiet time, and meditation. While we get closer to truth (in the territory sense), we lose something, and it takes conscious effort to notice it is missing and replace it. This is the community-level version of the individual problem of a LessWronger not committing to a friendship because it is not "true" - in marginal cases, believing in it can make it true!

(I recommend reading the whole series, or at least the article above. The example it gives is gri-gri: "In 2012, the recipe for gri-gri was revealed to an elder in a dream. If you ingest it and follow certain ritual commandments, then bullets cannot harm you." Before reading the article, think about how belief in elders helps with fighting well-armed neighboring villages.)

Comment by DusanDNesic on Scale Was All We Needed, At First · 2023-12-31T14:34:39.947Z · LW · GW

I assume Jan 1st 2025 is the natural day for a sequel :D

Comment by DusanDNesic on Defense Against The Dark Arts: An Introduction · 2023-12-31T13:42:49.803Z · LW · GW

Finding reliable sources is 99% of the battle, and I have yet to find one that would reliably pass the "too good to check" test: https://www.astralcodexten.com/p/too-good-to-check-a-play-in-three

Some people on this website manage that for some topics, the ACOUP blog does it for history, etc., but it's really rare, and mostly you end up having to "listen to Radio Liberty and Pravda and figure out the truth if you can."

On the style side, I agree with other commenters that you have selected an example where, even after all the reading, I am not convinced your criticism is correct under every possible frame. Picking something like a politician touting the good they have done despite actually being corrupt - something narrower in focus and more black-and-white - would leave you much less surface to defend. Here it took a lot of text, and I am unsure what techniques I have learned, since your criticisms themselves require more effort to check for validity. You explained that the sunk cost fallacy pushed you toward this example, but it's still not too late to swap in a different one, move this one into a Google doc as optional reading, and note your edit. People may read this in the future, and there's no reason not to ease the concept for them!

Comment by DusanDNesic on Defense Against The Dark Arts: An Introduction · 2023-12-29T16:19:30.881Z · LW · GW

On my phone, so I don't know how to format block quotes, but:

"My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with 'My counterarguments were bullshit, did you catch it?'"

That is exactly what I did - such a missed opportunity!

I also agree with other things you said. To contribute a useful phrase: your response to BS ("...is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take how seriously, I track how much people are able to engage with other worldviews, and which worldviews hold up and don't require avoidance techniques in order to preserve the worldview.") sounds a bit like Scott's Epistemic Learned Helplessness: https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/ - which I think is good when you are not in a live debate: say "I dunno, maybe," then later spend time thinking about and researching the argument to see whether it is true, without updating in the meantime.

Comment by DusanDNesic on E.T. Jaynes Probability Theory: The logic of Science I · 2023-12-28T11:07:46.766Z · LW · GW

Thank you for this - it's not a book I would generally pick up in my limited reading time, but this post has clarified a lot of terms and thinking around probabilities!

Comment by DusanDNesic on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T11:49:26.530Z · LW · GW

My experience is much like this (for context, I've spoken about AIS to the general public, online but mostly offline, to audiences from students to politicians). A more poetic, but also effective and intuitive, way to call this out (sacrificing some accuracy, but I think not too much) is: "we GROW AI." It puts AI in a category with genetic engineering and pharmaceuticals fairly neatly, and shows the difference between PowerPoint and ChatGPT in how they are made and why we don't know how the latter works. It is also more intuitive than "black box," which is a more technical term and not widely known.

Comment by DusanDNesic on Announcing new round of "Key Phenomena in AI Risk" Reading Group · 2023-11-08T14:10:56.424Z · LW · GW

Hello Gabriel! We plan to run this group roughly three times a year, so you should be able to apply for the next round around January/February, which would start in February/March (not confirmed, just estimates).

Comment by DusanDNesic on Alignment Implications of LLM Successes: a Debate in One Act · 2023-10-23T18:50:23.807Z · LW · GW

Other comments did a great job of thoughtfully critiquing the content, but I must say I also highly enjoyed the style, along with the light touch of Russian flavor in the character writing.

Comment by DusanDNesic on PIBBSS Summer Symposium 2023 · 2023-09-28T19:05:29.062Z · LW · GW

Thanks, Daniel! Most talks should be available soon (except the ones we do not have permission to post).

Comment by DusanDNesic on Barriers to Mechanistic Interpretability for AGI Safety · 2023-08-29T20:22:03.832Z · LW · GW

Even for humans - are my nails me? Once clipped, are they me? Is my phone me? I feel like my phone is more me than my hair, for example. Is my child me? Are my memes me? Is my country me? And so on... There are many reasons why agent boundaries are problematic, and that problem carries over into AI safety research.

Comment by DusanDNesic on Discussion about AI Safety funding (FB transcript) · 2023-05-03T23:07:22.811Z · LW · GW

I agree, but AIS jobs are usually fairly remote-friendly (unlike many corporate jobs) and the culture is better than at most universities I've worked with, so they have many non-wage perks. The question is: can people in low cost-of-living places find such highly paid work? In Eastern Europe, usually not - there are other people willing and able to work for less, so all wages are low; cost of living correlates with wages in that sense too. So giving generous salaries to experts who are in, or are willing to relocate to, lower cost-of-living places is cost-effective, insofar as they are currently an underutilized group. I know there are people in Eastern Europe who would make good researchers but are unaware of the problems, the salary landscape, and so on - which is something we're trying to fix (and global awareness of AI is helping a lot).

Comment by DusanDNesic on Discussion about AI Safety funding (FB transcript) · 2023-05-01T11:44:11.593Z · LW · GW

Perhaps not all of them are in the Bay Area or London? 150k per year can buy you three top professors from Eastern European universities working for you full time, and happy about it. Sure, other jobs pay more, but when you're not constrained to living in an expensive city, these grants actually go quite far. (We're toying with the idea of opening research hubs outside the world's most expensive cities for exactly that reason.)

Comment by DusanDNesic on Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship · 2023-01-25T14:42:03.069Z · LW · GW

For those interested, PIBBSS is happening again in 2023 - see more details here in LessWrong format, or on our website, if you want to apply.

Comment by DusanDNesic on [deleted post] 2023-01-16T23:29:07.099Z

Hello Ishan! This is lovely work, thank you for doing it!

Quick question - we (EA Serbia) are translating AGISF (2023) into Serbian (and making it readable to speakers of many related languages). Do I have your permission to translate your summary, to be used as notes for facilitators in the region, or for students after completing the course? We would obviously credit you and link to this post as the original. We would not need to start now (possibly mid-February or so), and we would wait for the 2023 version so it stays up to date with the course we are translating.

Thanks! 

(P.S. You may also want to answer the question of whether you are happy for it to be translated into any language, as a blank cheque of approval to translators from other countries ;) )

Comment by DusanDNesic on EA Serbia 3rd meet up · 2022-12-02T21:29:07.706Z · LW · GW

I have not read it, but it seems useful to come with that knowledge! :)

Thanks! The topic arose from the discussion we had last time on biorisks. If you have topics you want to explore, bring them to the meeting to suggest for January!