Posts

pranomostro's Shortform 2024-01-11T18:13:31.382Z
Berlin AI Alignment Open Meetup April 2023 2023-04-23T16:11:36.273Z
Berlin AI Alignment Open Meetup January 2023 2023-01-18T12:20:32.685Z
Berlin AI Alignment Open Meetup December 2022 2022-12-14T14:17:36.523Z
Berlin AI Alignment Open Meetup October 2022 2022-10-26T16:08:55.931Z
EA Berlin Public Meetup October 2022 2022-10-07T10:11:09.397Z
Berlin AI Alignment Open Meetup September 2022 2022-09-21T15:06:35.110Z
Berlin AI Safety Open Meetup July 2022 2022-07-06T12:41:33.677Z
Berlin AI Safety Open Meetup June 2022 2022-06-15T14:33:26.757Z
[Rough notes, BAIS] Human values and cyclical preferences 2022-05-13T13:28:04.907Z
AI Safety Berlin Kickoff Meetup 2022-04-18T16:55:53.359Z
Munich SSC/LW Meetup March 2021 2021-03-05T13:59:20.245Z
Munich SSC/LW Meetup February 2021 2021-02-06T11:25:05.483Z
Munich SSC/LW Meetup January 2021 2021-01-12T20:54:51.361Z
Munich SSC/LW Meetup November 2020 2020-11-08T19:02:55.359Z
Munich SSC Meetup October 2020 2020-10-06T12:34:57.230Z
Munich SSC Meetup September 2020 2020-08-31T18:28:11.360Z
2nd Munich Online SSC Meetup August 2020 2020-08-12T11:19:38.981Z
Munich Online SSC Meetup August 2020 2020-07-23T18:05:29.171Z
2nd Munich Online SSC Meetup July 2020 2020-07-08T19:21:36.319Z
Munich Online SSC Meetup July 2020 2020-06-26T17:47:29.478Z
Munich Online SSC Meetup June 2020 2020-06-15T17:41:45.634Z
3rd Munich Online SSC Meetup May 2020 2020-05-22T17:22:45.990Z
2nd Munich Online SSC Meetup May 2020 2020-05-12T17:12:16.839Z
Munich Online SSC Meetup May 2020 2020-04-25T01:16:18.926Z
2nd Munich Online SSC Meetup April 2020 2020-04-11T12:44:17.025Z
Munich Online SSC Meetup April 2020 2020-03-25T11:38:11.772Z
Munich SSC Meetup March 2020 2020-02-07T23:30:30.298Z
Welcome to Effective Altruism Munich 2020-01-24T21:31:35.662Z
Munich SSC Meetup February 2020 2020-01-19T20:54:04.199Z
Munich SSC Meetup January 2020 2019-12-17T17:26:18.931Z
Munich SSC Meetup December 2019 2019-12-02T21:41:35.980Z
Munich SSC Meetup 2019-11-12T22:13:04.805Z
Munich SSC Meetup 2019-10-10T18:41:13.058Z

Comments

Comment by pranomostro on [deleted post] 2022-08-05T19:58:07.754Z

The meetup is now cancelled, sorry for the confusion.

Comment by pranomostro on Infra-Bayesianism Distillation: Realizability and Decision Theory · 2022-07-30T12:14:52.809Z · LW · GW

One argument for infra-Bayesianism is that Bayesianism has some disadvantages (like performing badly in nonrealizable settings). The existing examples in this post are decision theory examples: Bayesianism + CDT performing worse than infra-Bayesianism.

(And if Bayesianism + CDT performs badly, why not choose Bayesianism + UDT? Or is the advantage just that infra-Bayesianism is more beautiful?)

Are there non-decision-theory examples that highlight the advantage of infra-Bayesianism over Bayesianism? That is, cases in which infra-Bayesianism leads to "better beliefs" than Bayesianism, without necessarily taking actions? Or is the yardstick for beliefs here that the agent receives higher reward, and everything else is irrelevant?

Comment by pranomostro on Are there English-speaking meetups in Frankfurt/Munich/Zurich? · 2022-06-13T12:38:26.560Z · LW · GW

There used to be an SSC/ACX group in Munich that I organized; maybe shoot an email to ssc_munich@lists.posteo.de to see if anyone responds? (It might take a day or so, since I will have to approve you as a sender.)

If that fails, there is a very lovely EA group in Munich that I have participated in :-) Information here.

Comment by pranomostro on Overcoming Akrasia/Procrastination - Volunteers Wanted · 2019-07-15T21:50:11.207Z · LW · GW

I'm interested.

Comment by pranomostro on Recommendation Features on LessWrong · 2019-06-15T10:19:52.496Z · LW · GW

Just some quick feedback on the "Continue Reading" feature: at the moment, when I read a post in the middle of a sequence, the next recommended post is the one at the beginning of the sequence, but I would like it to be the post after the last one I read in the sequence. Perhaps this is intentional, but I wouldn't use the feature that way, since I already try to read the posts in order (sometimes without being logged in).

Comment by pranomostro on Ask LW: Have you read Yudkowsky's AI to Zombie book? · 2019-03-17T14:00:01.758Z · LW · GW

I am currently reading it; right now I'm in the Quantum Physics sequence. I read it all here on LessWrong; I did not buy or read the book version. I sometimes skim the comments a bit, but sadly the threads have unraveled somewhat and it is hard to follow a conversation. I don't remember any specific occasion where the comments enlightened me in a new way, though they are sometimes interesting. I doubt it is necessary to read them, though.

Comment by pranomostro on The tech left behind · 2019-03-12T22:01:28.557Z · LW · GW

Plan 9 from Bell Labs comes to mind (papers & manpages): it is by the creators of Unix, has tight integration of networks (better than any other system I have seen so far), uses UTF-8 all the way down, and has an interesting concept in its process-wide inherited namespaces.

It used up way too many weirdness points, though, and was fighting the old Worse is Better fight. It lost, and we are left with the ugly and crufty unices of today.

Another one that comes to mind is Project Xanadu. It was quite similar to the modern web, but a lot more polished and clean in design and concept. It probably failed because of a really late delivery and because it was too slow for the hardware of the time.

I guess that's mostly the problem: ambitious projects use up a lot of weirdness points, and then fail to gain enough traction.

A project that will probably fall into the same category is Urbit. If you know a bit of computer science, the whitepaper is just pure delight; after page 20 I completely lost track. It has fallen victim to weirdness hyperinflation. It looks clean and sane, but I assign ~98% probability that its network will never have more than 50,000 users over the span of one month.

Comment by pranomostro on Sages in singularity · 2019-03-02T16:03:28.167Z · LW · GW

You were downvoted, but I think somebody should try to explain exactly why.

Both the idea of and the terminology around "sages" are highly questionable. They evoke mysterious answers and arguments from authority, which are exactly what we would want to avoid.

Could you maybe clarify a bit more what you mean by the word "sage"? It seems like you conflate people who want to solve problems and win with people who want deep insights into the nature of meaning.

Comment by pranomostro on Conditional Independence, and Naive Bayes · 2019-01-19T02:53:53.258Z · LW · GW

Possible (?) typo: "V is "Two arms and ten digits"" might be meant to be "V is "Two arms and ten fingers""

Never mind, I didn't know "digits" was another word for fingers.

Comment by pranomostro on Productivity: Instrumental Rationality · 2018-11-10T16:46:11.554Z · LW · GW

I started tracking my productivity at the beginning of this month, writing a "master plan" so that I know at each moment exactly what I should do next (okay, not "exactly" exactly, but detailed enough that it could in theory fill more than one day).

I realized how bad my productivity is. Which is excellent to know.

I'm not sure how much to include in the plan. At the moment it is so big that even with perfect self-restraint, wasting not one minute of the day, I would barely get it done. That seems okay, since I have sorted the activities by priority and I have been improving since I started.

But does anybody have experience with trying out stricter/less overwhelming plans and observing whether (and if so, how much and in which direction) they influence success?

Comment by pranomostro on 12 Virtues of Rationality posters/icons · 2018-07-22T09:10:54.464Z · LW · GW

Pretty cool. I like the aleph for scholarship.

Two things:

  • It looks like evenness is missing from your post.
  • I would represent the void with empty space as well, but add the caption "THE VOID" underneath it.