Comments

Comment by Celenduin (michael-grosse) on The Dunbar Playbook: A CRM system for your friends · 2023-09-16T18:43:39.689Z · LW · GW

Yep, I'm currently finding the balance between adding enough examples to posts and being sufficiently un-perfectionistic that I post at all.

I think it was definitely good that you posted this in its current form, rather than not posting it out of perfectionism!

As an example that also works with integers: the Decide 10 Rating System. It gives me a sense of the space covered by that scale, and it somehow works better for my brain.

Weighted factor modelling sounds interesting and potentially useful; I'll look into that too. Thanks!

Comment by Celenduin (michael-grosse) on The Dunbar Playbook: A CRM system for your friends · 2023-08-29T07:38:34.007Z · LW · GW

Thank you, upvoted! (with what little Karma that conveys from me)

It will certainly live as an open tab in my browser, but it doesn't feel directly usable for me.

What is especially challenging for me is to assign these "is" and "want" numbers with a consistent meaning. My gut feeling doesn't reliably map to a bare integer. What would help me would be an example (or many examples) of what people mean when a connection to another human feels like a "3" to them, or they want to have a "5" connection, and so on.

Comment by Celenduin (michael-grosse) on The Dunbar Playbook: A CRM system for your friends · 2023-08-29T07:14:42.862Z · LW · GW

"ought"

Should this be "want" to match the actual column name, both in the template and in the screenshot?

Comment by Celenduin (michael-grosse) on Micro Habits that Improve One’s Day · 2023-07-03T06:43:07.659Z · LW · GW

What is "physical fiction"?

Comment by Celenduin (michael-grosse) on Let's Terraform West Texas · 2022-09-04T19:32:21.938Z · LW · GW

AFAIK, the practicality of pumped storage is extremely location dependent. Building it on flat land would require moving enormous amounts of soil to create an artificial mountain for it. There is also the issue of evaporation.

Another storage method to consider for your scenario would be molten salt storage: heat up salt with excess energy, then use the hot salt to power a steam turbine when you need the energy back. https://en.wikipedia.org/wiki/Thermal_energy_storage
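To get a feel for why flat terrain makes pumped storage so awkward, here is a rough back-of-envelope sketch in Python; the 1 GW plant, 100 m height difference, and 80% round-trip efficiency are my own illustrative assumptions, not numbers from the post:

```python
# Rough back-of-envelope: water volume needed to store one day of output
# from a 1 GW plant via pumped storage, assuming a 100 m height difference.
# All numbers are illustrative assumptions, not figures from the post.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def pumped_storage_volume_m3(energy_j: float, head_m: float,
                             round_trip_efficiency: float = 0.8) -> float:
    """Water volume (m^3) needed to store energy_j joules at the given head."""
    return energy_j / (RHO_WATER * G * head_m * round_trip_efficiency)

energy_one_day = 1e9 * 24 * 3600  # 1 GW for 24 h, in joules
volume = pumped_storage_volume_m3(energy_one_day, head_m=100)
print(f"{volume:.2e} m^3")  # ~1.1e8 m^3, a reservoir on the order of 0.1 km^3
```

That is roughly 0.1 km³ of water plus an artificial 100 m elevation difference to hold it, which is why existing pumped-storage plants are built where the terrain already provides the height.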

Comment by Celenduin (michael-grosse) on The case for turning glowfic into Sequences · 2022-08-20T20:41:28.092Z · LW · GW

This would seem to be related to "Knowing when to lose" from HPMOR.

Comment by Celenduin (michael-grosse) on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-08-20T19:38:14.399Z · LW · GW

Is there a dedicated wiki (or "subject encyclopedia") for Project Lawful? I feel like collecting dath ilan concepts (like multi-agent-optimal boundary) might be valuable. It could include both an in-universe summary and context for each concept, and an out-of-universe explanation with references to introductory texts or research papers where needed.

Comment by Celenduin (michael-grosse) on AGI Ruin: A List of Lethalities · 2022-06-08T16:59:17.125Z · LW · GW

One pivotal act, maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be "develop neuralink+ and hook smart AI alignment researchers up to enough compute that they get smart enough to actually solve all these issues and develop truly safely aligned, powerful AGI"?

While developing neuralink+ would still be a very powerful capability, maybe it could sidestep a few of the problems by virtue of being physically local instead of having to act on the entire planet? Of course, this comes with its own set of issues, because we would now have superhumanly powerful entities that may still have human (dark) impulses.

Not sure if that would be better than our reference scenario of doom or not.

Comment by Celenduin (michael-grosse) on AGI Ruin: A List of Lethalities · 2022-06-08T16:28:31.297Z · LW · GW

On second thought: don't we already have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?

Comment by Celenduin (michael-grosse) on AGI Ruin: A List of Lethalities · 2022-06-08T16:23:36.665Z · LW · GW

🤔

Not sure if I'm the right person, but it seems worth thinking about how one would maybe approach this if one were to do it.

So the idea is to have an AI-Alignment PR / social media org/group/NGO/think tank/company whose goal is to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of would be 80,000 Hours, which is somewhat more general in its goals and more conservative in its strategies.

I'm not a sales/marketing person, but as I understand it, the usual metaphor to use here is a funnel?

  • Starting with ads / sponsorships that try to reach the right people[0] (e.g. I saw Jane Street sponsor Matt Parker)
  • then narrowing down more and more, first introducing people to why this is an issue (orthogonality, instrumental convergence)
  • hopefully having them realize for themselves, guided by arguments, that this is an issue that genuinely needs solving and that maybe their skills would be useful
  • increasing the math content as needed
  • finally, somehow selecting for self-reliance and providing a path for getting started with thinking about this problem by themselves / model building / independent research
    • or otherwise improving the overall situation (convince your congress member of something? run for congress? ...)

Probably that would include copywriting (or hiring or contracting copywriters) to go over a number of our documents and make them more digestible and actionable.

So, I'm probably not the right person to get this off the ground, because I don't have a clue about any of this (not even entrepreneurship in general), but it does seem like a thing worth doing and maybe like an initiative that would get funding from whoever funds such things these days?

[0] Though maybe we should also work towards a better understanding of who "the right people" are? Given that our current bunch of ML researchers/physicists/mathematicians has not been able to solve it, maybe it is time to consider broadening our net in a responsible way.

Comment by Celenduin (michael-grosse) on AGI Ruin: A List of Lethalities · 2022-06-07T15:49:27.766Z · LW · GW

I wonder if we could be much more effective in outreach to these groups?

Like making sure that Robert Miles is funded sufficiently to have a professional team +20% (if that is not already the case). Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or collaborating with her on a video about this. Though given her attitude towards the physics community, working with her might be a gamble and a double-edged sword. Can we get market research on which influencers have a high number of followers among ML researchers/physicists/mathematicians and then work with them / sponsor them?

Or maybe micro-target this demographic with facebook/google/github/stackexchange ads and point them to something?

I don't know, I'm not a marketing person, but I feel like I would have seen much more of these things if we were doing enough of them.

Not saying that this should be MIRI's job; rather, I'm stating that I'm confused, because I feel like we as a community are not taking an action that seems obvious to me. Especially given how recent advances in published AI capabilities seem to make the problem much more legible. Is the reason for not doing it really just that we're all a bunch of nerds who are bad at this kind of thing, or is there more to it that I'm missing?

While I see that there is a lot of risk of such outreach increasing the amount of noise, I wonder if that tradeoff might be shifting as timelines get shorter, given that we don't seem to have better plans than "have a diverse set of smart people come up with novel ideas of their own in the hope that one of them works out". So taking steps to entice a somewhat more diverse group of people into the conversation might be worth it?

Comment by Celenduin (michael-grosse) on App and book recommendations for people who want to be happier and more productive · 2022-05-01T12:38:26.646Z · LW · GW

The link to your framework for onboarding habits / SEEP is broken. Here is an archived version of that article: https://web.archive.org/web/20211125065547/http://www.katwoods.co/home/june-14th-2019

Comment by Celenduin (michael-grosse) on Ukraine Post #11: Longer Term Predictions · 2022-04-26T20:58:31.308Z · LW · GW

Thank you for providing these updates. Not being well versed myself in reading prediction markets and drawing conclusions from them, I appreciate your perspective and the thinking you share behind it.

Comment by Celenduin (michael-grosse) on Ukraine Post #10: Next Phase · 2022-04-13T23:43:01.683Z · LW · GW

I'm seeing quite a few reports that the US is supplying loitering munitions, specifically Switchblade drones, to Ukraine. Would those fall under your definition of "small drones with AI", or are you thinking of something else?

Comment by Celenduin (michael-grosse) on Predicting a global catastrophe: the Ukrainian model · 2022-04-10T11:15:02.442Z · LW · GW

No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.

By "wrong" here I mean "incorrectly predicted the future". If there is a binary event, and I predicted the outcome A, but the reality delivered the outcome B, then I incorrectly predicted the future.

Maybe an intuition pump for what I think Christian is pointing at:

  1. Assume you have a 6-sided die, and you predict that the probability that your next roll will be a 6, and not one of the other faces, is about 16.67%.
  2. Then you roll the die, and the face with the 6 comes up on top.

Was your prediction wrong?
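One way to make this concrete: probabilistic forecasts are evaluated across many predictions with a proper scoring rule (e.g. the Brier score), not declared right or wrong from a single outcome. A minimal Python sketch of that idea; the two forecasters and the 10,000 simulated rolls are my own illustration:

```python
import random

def brier_score(prob_of_event: float, event_happened: bool) -> float:
    """Squared error between the stated probability and the 0/1 outcome (lower is better)."""
    return (prob_of_event - (1.0 if event_happened else 0.0)) ** 2

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(10_000)]

# Forecaster A always says "1/6 chance the next roll is a 6" (calibrated).
# Forecaster B always says "100% chance the next roll is a 6" (overconfident).
score_a = sum(brier_score(1 / 6, r == 6) for r in rolls) / len(rolls)
score_b = sum(brier_score(1.0, r == 6) for r in rolls) / len(rolls)

print(f"calibrated forecaster:    {score_a:.3f}")  # ~0.14
print(f"overconfident forecaster: {score_b:.3f}")  # ~0.83
```

A single 6 coming up does not make the 16.67% forecast wrong; over many rolls, the calibrated forecaster scores far better than one who confidently predicts a 6 every time.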

Comment by Celenduin (michael-grosse) on Ukraine Post #7: Prediction Market Update · 2022-03-30T02:00:12.960Z · LW · GW

Thanks!

Regarding the likelihood of a substantial ceasefire soon and Putin's continued presidency: recent news makes it seem to me like Putin's administration could be starting to lay the rhetorical groundwork for an exit. Particularly these bits:
1. Russia announced that it will reduce its operations around Kyiv. I think I read somewhere that they claimed something like "The attack on Kyiv was only made in order to bind Ukrainian troops there", but I can't find the source now.
2. Focusing on the Donetsk region. Actually getting control there seems realistic?
3. Donetsk separatists thinking about a referendum on joining Russia.

This might be an exit strategy that maybe could be spun internally in a way that saves Putin's face: "The primary goal of the Special Military Operation was to stop the Ukrainian genocide of Russian civilians in Donetsk. We achieved that goal! Mission accomplished 🎉! In cooperation with the government in Donetsk, we will leave several battalions of soldiers in the Donetsk area, in order to protect the Russians there from future atrocities."

And given that Putin massively expanded his administration's media control and police state in the last month, I would expect that internally he can make that work without anyone important daring to claim that Eurasia was not always at war with Eastasia, i.e. that the war goals were radically different in the beginning.

Taken together, these things could help explain why "Kyiv falls" and "Putin remains president" seem to have become less linked over time.

Thoughts?



PS: Your moderation guidelines say "Norm Enforcing - I try to enforce particular rules" - Where can I read what particular norms you are enforcing? I don't want to break any out of ignorance.

Comment by Celenduin (michael-grosse) on I left Russia on March 8 · 2022-03-11T09:10:57.133Z · LW · GW

This encounter with the guards at the border sounds scary. I'm glad you got through safely.
I hope your new location can provide some respite to you and your family 🌸

Comment by Celenduin (michael-grosse) on What's the easiest way to currently generate images with machine learning? · 2022-02-26T16:59:38.703Z · LW · GW

Note that there also exists a web version of the WOMBO Dream app: https://app.wombo.art/