Posts

Lars Doucet's Georgism series on Astral Codex Ten 2021-12-04T19:43:00.000Z

Comments

Comment by Sune on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-22T03:07:14.740Z · LW · GW

The Alcor page has not been updated since 15th December 2022, when a person who died in August 2022 (as well as later data) was added, so if he was signed up there, we should not expect it to be mentioned yet. For CI, the latest update was for a patient who died on 29th February 2024, but I can’t see any indication of when that post was made.

Comment by Sune on Succession · 2023-12-26T20:32:06.140Z · LW · GW

My point is that potential parents often care about non-existing people: their potential kids. And once they bring these potential kids into existence, those kids might start caring about a next generation. Similarly, some people/minds will want to expand because that is what their company does, or because they would like the experience of exploring a new planet/solar system/galaxy, or the status of being the first to settle there.

Comment by Sune on Succession · 2023-12-26T16:28:57.438Z · LW · GW

Which non-existing person are you referring to?

Comment by Sune on Succession · 2023-12-26T07:31:40.988Z · LW · GW

Beyond a certain point, I doubt that the content of the additional minds will be interestingly novel.

Somehow people keep finding meaning in falling in love and starting a family, even when billions of people have already done that before. We also find meaning in pursuing careers that are very similar to what millions of people have done before, or in traveling to destinations that have been visited by millions of tourists. The more similar an activity is to something our ancestors did, the more meaningful it seems.

From the outside, all this looks grabby, but from the inside it feels meaningful.

Comment by Sune on Would you have a baby in 2024? · 2023-12-25T07:47:22.375Z · LW · GW

There has been enough discussion about timelines that it doesn’t make sense to provide evidence about them in a post like this. Most people on this site have already formed views about timelines, and for many, these are much shorter than 30 years. Hopefully, readers of this site are ready to change their views if strong evidence in either direction appears, but I don’t think it is fair to expect a post like this to also include evidence about timelines.

Comment by Sune on Succession · 2023-12-22T22:02:22.985Z · LW · GW

There is a huge amount of computation going on in this story and, as far as I can tell, not even a single experiment. The ending hints that there might be some learning from the protagonist’s experience; at least it is telling its story many times. But I would expect a lot more experimenting, for example with different probe designs and with how much posthumans like different possible negotiated results.

I can see that within the story it makes sense not to experiment with posthumans’ reactions to scenarios, since it might take a long time to send them to the frontier and since it might be possible to simulate them well (it’s not clear to me if the posthumans are biological). I just wonder if this extreme focus on computation over experiments is a deliberate choice by the author or a blind spot.

Comment by Sune on Succession · 2023-12-21T07:58:14.704Z · LW · GW

An alternative reason for building telescopes would be to receive updates and more efficient strategies for expanding, found after the probe was sent out.

Comment by Sune on Pope Francis shares thoughts on responsible AI development · 2023-12-16T09:03:36.848Z · LW · GW

How did this happen?! I guess not by rationalists directly trying to influence the pope? But I’m curious to know the process leading up to this.

Comment by Sune on On Trust · 2023-12-08T12:05:08.241Z · LW · GW

What does respect mean in this case? That is a word I don’t really understand; it seems to be a combination of many different concepts mixed together.

Comment by Sune on On Trust · 2023-12-07T06:29:34.643Z · LW · GW

This is also just another way of saying “willing to be vulnerable” (from my answer below) or maybe “decision to be vulnerable”. Many of these answers are just saying the same thing in different words.

Comment by Sune on On Trust · 2023-12-06T20:00:38.365Z · LW · GW

My favourite definition of trust is “willingness to be vulnerable”, and I think this answers most of the questions in the post. For example, it explains why trust is a decision that can exist independently from your beliefs: if you think someone is genuinely on your side with probability 95%, you can choose to trust them, by doing something that benefits you in 95% of cases and hurts you in the 5% of cases, or you can decide not to, by taking actions that are better in the 5% of cases. Similarly for trusting a statement about the world.
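To make the belief/decision separation concrete, here is a minimal sketch; the payoff numbers are made up for illustration:

```python
# The same 95% belief can support either decision, depending on the stakes
# of being vulnerable. All payoffs are hypothetical.
p_genuine = 0.95

payoff_if_trusting = {True: 10, False: -50}  # good if genuine, bad if not
payoff_if_guarded = {True: 2, False: 0}      # mediocre either way

ev_trust = p_genuine * payoff_if_trusting[True] + (1 - p_genuine) * payoff_if_trusting[False]
ev_guard = p_genuine * payoff_if_guarded[True] + (1 - p_genuine) * payoff_if_guarded[False]

print(f"trust: {ev_trust}, guard: {ev_guard}")  # 7.0 vs 1.9: trusting wins here
# But with payoff_if_trusting[False] = -500, the same 95% belief favours not trusting.
```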

I think this definition comes from psychology, but I have also found it useful when talking about trusted third parties in cryptography. In this case too, we don’t care about the probability that the third party is malicious; what matters is that you are vulnerable if and only if they are malicious.

Comment by Sune on Book Review: 1948 by Benny Morris · 2023-12-04T16:37:10.919Z · LW · GW

whilst the Jews (usually) bought their land fair and square, the owners of the land were very rarely the ones who lived and worked on it.

I have heard this before but never understood what it meant. Did the people who worked the land respect the ownership of the previous owners, for example by paying rent or by being employed by them, but just not respect the sale? Or did the people who worked the land consider themselves to be the owners, or not have the same concept of ownership as we do today?

Comment by Sune on Sherlockian Abduction Master List · 2023-12-03T10:04:07.663Z · LW · GW

If someone accidentally uses “he” when they meant “she” or vice versa, when talking about a person whose gender they know, it is likely because the speaker’s first language does not distinguish between he and she. This could be Finnish, Estonian, Hungarian, some Turkic languages, and probably other languages as well. I haven’t actually used this cue, but I noticed it with a Finnish speaker.

Comment by Sune on 2023 Unofficial LessWrong Census/Survey · 2023-12-02T08:48:02.649Z · LW · GW

The heading of this question is misleading, but I assume I should answer the question and ignore the heading.

P(Global catastrophic risk) What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity?

Comment by Sune on Queuing theory: Benefits of operating at 60% capacity · 2023-12-01T22:26:00.669Z · LW · GW

You don’t really need the producers to be “idle”; you just have to ensure that if something important shows up, they are ready to work on it. Instead of having idle producers, you can have them work on lower-priority tasks. Has this also been modelled in queueing theory?
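For what it’s worth, a toy simulation of the idea; all parameters are made up, and the low-priority work is assumed to be perfectly preemptible (the interesting queueing-theory question is presumably what happens when it is not):

```python
import random

# High-priority jobs arrive at rate 0.6 and are served at rate 1.0 (60%
# utilisation). The Lindley recursion tracks their waiting times; the gaps
# where the server would otherwise be idle are exactly the time available
# for preemptible low-priority work.
random.seed(0)
lam, mu, n_jobs = 0.6, 1.0, 100_000

wait, total_wait, free_time = 0.0, 0.0, 0.0
for _ in range(n_jobs):
    inter = random.expovariate(lam)                # time until next arrival
    service = random.expovariate(mu)
    free_time += max(0.0, inter - wait - service)  # idle gap before next job
    wait = max(0.0, wait + service - inter)        # Lindley recursion
    total_wait += wait

print(f"mean high-priority wait: {total_wait / n_jobs:.2f}")  # ~1.5
print(f"fraction of time free for low-priority work: {free_time * lam / n_jobs:.2f}")  # ~0.4
```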

Comment by Sune on ChatGPT 4 solved all the gotcha problems I posed that tripped ChatGPT 3.5 · 2023-11-30T08:56:11.901Z · LW · GW

I have a question that tricks GPT-4, but if I post it I’m afraid it’s going to end up in the training data for GPT-5. I might post it once there is a GPT-n that solves it.

Comment by Sune on How can I use AI without increasing AI-risk? · 2023-11-28T10:25:11.192Z · LW · GW

You can use ChatGPT 3.5 for free with chat history turned off. This way your chats should not be used as training data.

Comment by Sune on OpenAI: Facts from a Weekend · 2023-11-21T17:16:45.661Z · LW · GW

The corporate structure of OpenAI was set up as an answer to concerns (about AGI and control over AGIs) which were raised by rationalists. But I don’t think rationalists believed that this structure was a sufficient solution to the problem, any more than non-rationalists did. The rationalists I have been speaking to were generally sceptical about OpenAI.

Comment by Sune on OpenAI: Facts from a Weekend · 2023-11-21T11:41:24.675Z · LW · GW

They were not loyal to the board, but it is not clear if they were loyal to the Charter, since they were not given any concrete evidence of a conflict between Sam and the Charter.

Comment by Sune on [deleted post] 2023-11-20T14:35:59.104Z

I don’t understand how this is a meaningful attitude towards your own private economy. But wanting to donate to someone who needs it more is also a way to spend your money. This would be charity, possibly EA.

Comment by Sune on [deleted post] 2023-11-20T06:16:30.966Z

I have noticed a separate disagreement about what capitalism means, between me and a family member.

I used to think of it as how you handle your private economy. If you are a capitalist, it means that when you have a surplus, you save it up and use it (as capital) to improve your future, i.e. you invest it. The main alternative is to be a consumer, who simply spends it all.

My family member sees capitalism as something like big corporations that advertise and make you spend money on things you don’t need. She sees consumerism and capitalism as basically the same thing, while I see them as complete opposites.

Comment by Sune on Sam Altman fired from OpenAI · 2023-11-19T21:18:52.906Z · LW · GW

OK, it looks like he was invited into OpenAI’s office for some reason at least: https://twitter.com/sama/status/1726345564059832609

Comment by Sune on Sam Altman fired from OpenAI · 2023-11-19T20:08:03.079Z · LW · GW

It seems the sources are supporters of Sam Altman. I have not seen any indication of this from the board’s side.

Comment by Sune on Sam Altman fired from OpenAI · 2023-11-18T12:25:16.178Z · LW · GW

It seems this was a surprise to almost everyone even at OpenAI, so I don’t think it is evidence that there isn’t much information flow between LW and OpenAI.

Comment by Sune on Loudly Give Up, Don't Quietly Fade · 2023-11-14T06:45:13.552Z · LW · GW

There seems to be an edit error after “If I just stepped forward privately, I tell the people I”. If this post wasn’t about the bystander effect, I would just have hoped someone else would have pointed it out!

Comment by Sune on How I Think, Part Two: Distrusting Individuals · 2023-11-08T11:00:29.671Z · LW · GW

Corollary: don’t trust yourself!

Comment by Sune on How to (hopefully ethically) make money off of AGI · 2023-11-07T12:25:41.262Z · LW · GW

Most cryptocurrencies have slow transactions. For AIs, which think and react much faster than humans, the latency would be more of a problem, so I would expect AIs to find a better solution than current cryptocurrencies.

Comment by Sune on Are language models good at making predictions? · 2023-11-06T22:02:48.694Z · LW · GW

I don’t find it intuitive at all. It would be intuitive if you started by telling a story describing the situation and asked the LLM to continue the story, and you then sampled randomly from the continuations and counted how many of them would lead to a positive resolution of the question. This should be well-calibrated (assuming the details included in the prompt were representative, and that there isn’t a bias in which types of endings such stories have in the LLM’s training data). But this is not what is happening. Instead the model outputs a token which is a number, and somehow that number happens to be well-calibrated. I guess that would mean that the predictions made in the training data are well-calibrated? That just seems very unlikely.
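A sketch of the sampling procedure described above; the two callables are hypothetical stand-ins for the LLM sampling call and for a judge of how the story ended, not a claim about how the paper’s numbers were produced:

```python
from typing import Callable

def estimate_probability(
    prompt: str,
    sample_continuation: Callable[[str], str],  # wraps the LLM sampling call
    resolves_positive: Callable[[str], bool],   # judges how the story ended
    n: int = 100,
) -> float:
    """Estimate P(event) by sampling n story continuations and counting
    the fraction in which the question resolves positively."""
    positive = sum(
        resolves_positive(sample_continuation(prompt)) for _ in range(n)
    )
    return positive / n
```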

Comment by Sune on Deception Chess: Game #1 · 2023-11-03T22:40:12.146Z · LW · GW

Two possible variations of the game that might be worth experimenting with:

  1. Let the adversaries have access to a powerful chess engine. That might make it a better test of what malicious AIs are capable of.
  2. Make the randomisation such that there might not be an honest C (a sketch follows below). For example, if there is a 1/4 chance that no player C is honest, each adversary would still think that one of the other advisors might be honest, so they would want to gain player A’s trust, and hence end up being helpful. I think the player Cs might improve player A’s chances of winning (compared to no advisors) even when all of them are adversarial.

I think the variations could work separately, but if you put them together, it would be too easy for the adversaries to agree on a strong-looking but losing move when all player Cs are adversaries.
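A minimal sketch of the randomisation in variation 2; the 1/4 is just the example number from above:

```python
import random

def assign_advisors(n_advisors: int = 3, p_all_adversarial: float = 0.25):
    """With probability p_all_adversarial every advisor is an adversary;
    otherwise exactly one, chosen uniformly, is honest. No adversary can
    rule out that one of its peers is the honest one."""
    roles = ["adversary"] * n_advisors
    if random.random() >= p_all_adversarial:
        roles[random.randrange(n_advisors)] = "honest"
    return roles
```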

Comment by Sune on Lying to chess players for alignment · 2023-10-26T05:35:13.632Z · LW · GW

Why select a deterministic game with complete information for this? I suspect that in games like poker or backgammon it would be easier for the adversarial advisors to fool the player, and that these games are a better model of the real-world scenario.

Comment by Sune on Paper: LLMs trained on “A is B” fail to learn “B is A” · 2023-09-25T17:09:24.426Z · LW · GW

This seems like the kind of research that can have a huge impact on capabilities, and a much smaller and more indirect impact on alignment/safety. What is your reason for doing it and publishing it?

Comment by Sune on The Base Rate Times, news through prediction markets · 2023-06-11T18:54:19.589Z · LW · GW

How about “prediction sites”? Although that could include other things like 538. Not sure if you want to exclude them.

Comment by Sune on The Base Rate Times, news through prediction markets · 2023-06-11T18:50:43.923Z · LW · GW

In case you didn’t see the author’s comment below: there is now a Patreon button!

Comment by Sune on The Base Rate Times, news through prediction markets · 2023-06-11T18:49:32.843Z · LW · GW

Sorry, my last comment wasn’t very constructive. I was also conflating two different criticisms:

  1. that some changes in predicted probabilities are due to the deadline getting closer, and you need to make sure not to report that as news, and
  2. that deadlines are not in the headlines, and not always in the graphs either.

About 2): I don’t actually think this is much of a problem, as long as you ensure that the headline is not misleading and that the information about deadlines is easily available. However, if the headline does not contain a deadline, and the deadline is relevant, I would not write any percentages in it. Putin has a 100% chance of dying, just like the rest of us, so it doesn’t make sense to say he has a 90% chance of staying in power without a timeframe. In that case, I would prefer the headline to just state the direction the probability is moving in, e.g. “Putin’s hold on power in Russia is as stable as 3 months ago” or something like that.

To avoid writing “by 2024” in all headlines, maybe you could create subsections of the site by deadline. It would be a good user experience if you could feel like you are scrolling further into the future, starting with predictions for the start of 2024, then 2025, then 2030. Of course, this requires that there are several predictions for each deadline.

About 1), I think you should only include predictions if they cannot be explained by the nearing deadline.

For some questions this is not a problem at all, e.g. who is going to win an election.

For questions about whether something happens within a given timeframe, the best solution would be if prediction sites started making predictions with a constant timeframe (e.g. 1 year) instead of a constant deadline. I made a feature request about this to Metaculus. They said they liked the idea, but they did not provide any prediction of the probability that it would be implemented!

An alternative is to ask for a probability distribution over when something is going to happen. Such questions already exist on Metaculus. Then you can see if the expected remaining time, or the time until the median prediction, is increasing, or something similar.

For questions with a fixed deadline, if the predicted probability of something happening is increasing, you can still conclude that the event is getting more likely.

For questions with a fixed deadline and declining probabilities, it is harder to conclude anything. The very naive model would be linear decline, so p(t)/t is constant, where t denotes the time left and p(t) the probability at that time; e.g. with one month left, t=1/12. A slightly less naive model is that the event has a constant probability per unit of time, given that it hasn’t already happened. In this case, the constant would be log(1-p(t))/t.

In this model, if the probability is declining faster than that, meaning that |log(1-p(t))/t| is decreasing, I would say that is a signal that the probability of the event is getting lower.

If p(t) is declining, but slowly enough that |log(1-p(t))/t| is increasing, I would not conclude anything from that, at least not on Metaculus, because people forget to update their predictions sufficiently. I’m not familiar with other prediction sites; maybe it works better when there is money at stake.

However, this model does not apply to all questions. It would be a useful model for e.g. “will there be a big cyber attack of a given type by a given date”, where the event happens without warning, but for other questions, like “will Putin lose power by a given date”, we might expect some indication of it before it happens, so we would expect the probability to go to 0 faster. For such questions, I don’t think you should ever conclude that the underlying event is getting less likely from a single fixed-deadline question.

So to conclude: if the predicted probability of some event happening before the deadline is going up, it is fair to report it as the probability going up. If the prediction is going down, you should only draw a conclusion in the rare case where you think the event would happen without much notice and |log(1-p(t))/t| is decreasing.
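A minimal sketch of that decision rule, applied to two hypothetical snapshots of the same fixed-deadline question:

```python
import math

def implied_hazard(p: float, t: float) -> float:
    """Constant-hazard model: if the event has a constant probability per
    unit of time given that it hasn't happened yet, then log(1 - p(t)) / t
    is constant; its magnitude is the implied hazard rate."""
    return abs(math.log(1.0 - p) / t)

# Hypothetical question: the prediction fell from 20% (6 months to the
# deadline) to 12% (3 months to the deadline).
before = implied_hazard(0.20, 6 / 12)
after = implied_hazard(0.12, 3 / 12)

if after < before:
    print("signal: the underlying event looks less likely")
else:
    print("no conclusion: the decline is explained by the nearing deadline")
```

In this example the probability dropped from 20% to 12%, yet the implied hazard went up, so the whole decline is explained by the deadline getting closer.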

Comment by Sune on The Base Rate Times, news through prediction markets · 2023-06-08T05:27:57.847Z · LW · GW

I think this is a great project! Have you considered adding a donation button or using Patreon to allow readers to support the project?

I do have one big issue with the current way the information is presented: one of the most important things to take into account when making and interpreting predictions is the timeframe of the question. For example, if you are asking about the probability that Putin loses power, the probability would likely be about twice as high over a 2-year timeframe as over a 1-year timeframe, assuming the probability per month does not change much.
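To spell out that arithmetic, with a constant per-month probability q and the event unlikely in either timeframe (24q ≪ 1):

$$P(\text{within 24 months}) = 1-(1-q)^{24} \approx 24q \approx 2\bigl(1-(1-q)^{12}\bigr) = 2\,P(\text{within 12 months}).$$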

Currently, the first 5 top-level headlines all ignore the timeframe, making them meaningless:

“Putin >90% likely to remain in power, no change”
“Odds of ceasefire ~15%, recovering from ~12% low”
“Russia ~85% likely to gain territory, up from ~78%”
“Crimea land bridge ~30% chance of being cut, no change”
“Escalation: ~5% risk of NATO clash with Russia, down from ~9%”

The last one is particularly misleading. It compares the probabilities from the start of April to the probabilities now (start of June). But one of the markets has a deadline of June 12th and the other prediction has a deadline of July 1st, so it is not surprising that the probability is down!

In order to conclude that the risk is decreasing, the question would need a moving deadline. I’m not aware of any prediction sites that allow questions with a moving timeframe, although it would be a great feature to have.

Comment by Sune on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-30T05:07:31.246Z · LW · GW

Shouldn’t you get a notification when there are reactions to your post? At least in the batched notifications. The urgency/importance of reactions is somewhere between replies, where you get the notification immediately, and karma changes, where the default is that they are batched.

Comment by Sune on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-29T19:33:29.988Z · LW · GW

Can you only react with the -1 version of a reaction if someone else has already reacted with the +1 version?

Comment by Sune on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-29T07:23:12.882Z · LW · GW

Most of the reactions are either positive or negative, but if a comment has several reactions, I find it difficult to see immediately which are positive and which are negative. I’m not sure if this is a disadvantage, because it is slightly harder to get people’s overall valuation of the comment, or actually an advantage, because you can’t get the pleasure/pain of learning the overall reaction to your comment without first learning the specific reasons for it.

Another issue, if we (as readers of the reactions) tend to group reactions into positive and negative, is that it is possible to leave several reactions on one comment. It means that if 3 people have left positive reactions, a single person can outweigh that by leaving 3 different negative reactions. A reader would only realise this by hovering over the reactions. I do think it is useful to be able to leave more than one reaction, especially in cases where you have both positive and negative feedback, or where one of them is neutral (e.g. “I will respond later”), so I’m not sure if there is a good solution to this.

Comment by Sune on Reacts now enabled on 100% of posts, though still just experimenting · 2023-05-29T07:03:18.659Z · LW · GW

Testing comment. Feel free to react to this however you like; I won’t interpret the reactions as feedback on the comment.

Comment by Sune on $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions · 2023-05-18T06:35:39.509Z · LW · GW

I don't follow the construction. Alice doesn't know x and S when choosing f. If she is taking the preimage for all 2^n values of x, each with a random S, she will have many overlapping preimages.

Comment by Sune on $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions · 2023-05-17T19:11:06.619Z · LW · GW

I tried and failed to formalize this. Let me sketch the argument, to show where I ran into problems.

Consider a code  with a corresponding decoding function , and assume that   .

For any function  we can define .  We then choose  randomly from the  such functions. We want the code to be such that for random   and random  the information  is enough to deduce , with high probability.  Then each  would give Bob one bit of information about  (its value at the point ) and hence one bit about . Here we use the assumption   to avoid collisions .

Unfortunately, this argument does not work. The issue is that  is chosen at random, instead of as an encoding  of a message. Because of this, we should not expect  to be close to a valid code, so we should not expect there to be a decoding method that will give consistent decodings of  for different values of .

It is not clear to me if this is a bug in the solution or a bug in the problem! The world is not random, so why do we want  to be uniform in ?

Comment by Sune on $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions · 2023-05-17T18:37:15.464Z · LW · GW

This question is non-trivial even for . Here it becomes: let Alice choose a probability  (which has to be of the form , but this is irrelevant for large ) and Bob observes the binomially distributed number . With which distribution should Alice choose  to maximize the capacity of this channel?
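For what it’s worth, the reduced problem can be estimated numerically. A minimal sketch under my reading of it (Alice picks a success probability p from a finite grid, Bob observes k ~ Binomial(N, p), and the Blahut–Arimoto algorithm approximates the capacity of that channel):

```python
import numpy as np
from scipy.stats import binom

def binomial_channel_capacity(N: int, grid: int = 101, iters: int = 300) -> float:
    """Blahut-Arimoto estimate (in bits) of the capacity of the channel
    p -> k ~ Binomial(N, p), with Alice's choice of p restricted to a grid."""
    ps = np.linspace(0.0, 1.0, grid)
    # W[x, k] = P(Bob observes k successes | Alice chose probability ps[x])
    W = binom.pmf(np.arange(N + 1)[None, :], N, ps[:, None])
    r = np.full(grid, 1.0 / grid)              # Alice's input distribution
    for _ in range(iters):
        out = r @ W                            # marginal distribution over k
        q = (r[:, None] * W) / out[None, :]    # posterior P(input | output)
        logq = np.log(np.where(q > 0, q, 1.0))
        r = np.exp((W * logq).sum(axis=1))     # Blahut-Arimoto input update
        r /= r.sum()
    out = r @ W
    ratio = np.where(W > 0, W / out[None, :], 1.0)
    return float((r[:, None] * W * np.log2(ratio)).sum())

print(binomial_channel_capacity(N=20))  # grows roughly like 0.5 * log2(N)
```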

Comment by Sune on An artificially structured argument for expecting AGI ruin · 2023-05-08T18:04:14.912Z · LW · GW

"STEM-level" is a type error: STEM is not a level, it is a domain. Do you mean STEM at highschool-level? At PhD-level? At the level of all of humanity put together but at 100x speed? 

Comment by Sune on A test of your rationality skills · 2023-04-20T05:07:29.157Z · LW · GW

Seems difficult to mark answers to this question.

The type of replies you get, and the skills you are testing, would also depend on how long the subject spends on the test. Did you have a particular time limit in mind?

Comment by Sune on [deleted post] 2023-03-29T17:02:47.620Z

This seems to be a copy of an existing one month old post: https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practical-advice-for-the-worried

Comment by Sune on What do you think is wrong with rationalist culture? · 2023-03-10T21:41:57.353Z · LW · GW

What are you comparing to? Is it only compared to what you would want rationalist culture to be like, or do you have examples of other cultures (besides academia) that do better in this regard?

Comment by Sune on Petition - Unplug The Evil AI Right Now · 2023-02-15T20:22:12.836Z · LW · GW

I mostly agree and have strongly upvoted. However, I have one small but important nitpick about this sentence:

The risks of imminent harmful action by Sydney are negligible.

I think when it comes to x-risk, the correct question is not “what is the probability that this will result in existential catastrophe”. Suppose that there is a series of potentially harmful and increasingly risky AIs, where AI number n has some probability p_n of causing existential catastrophe unless you press a stop button. If the probabilities grow sufficiently slowly, then existential catastrophe will most likely first happen for an n where p_n is still low. A better question to ask is “what was the probability of existential catastrophe happening for some m ≤ n”.
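To make that concrete, a small sketch; the risk schedule p_n is made up:

```python
# If AI number n has probability p_n of causing catastrophe (given that the
# earlier ones did not), catastrophe most likely first occurs while p_n is
# still small, provided p_n grows slowly.
def p(n: int) -> float:
    """Made-up, slowly growing per-AI risk schedule."""
    return min(1.0, 0.001 * n)

survive = 1.0      # P(no catastrophe before AI n)
cumulative = 0.0   # P(catastrophe by AI n)
for n in range(1, 10_001):
    cumulative += survive * p(n)   # P(catastrophe first occurs at AI n)
    survive *= 1.0 - p(n)
    if cumulative >= 0.5:
        print(f"even odds of catastrophe by AI #{n}, where p_n is only {p(n):.1%}")
        break
```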

Comment by Sune on On not getting contaminated by the wrong obesity ideas · 2023-01-28T23:22:52.555Z · LW · GW

I’m confused by the first two diagrams in the section called “There wasn’t an abrupt shift in obesity rates in the late 20th century”. As far as I understand, they contain data about the distribution of BMI for black females and white males at age 50, born up until 1986. If so, they would have to contain data from 2036.

Comment by Sune on [Cross-post] Is the Fermi Paradox due to the Flaw of Averages? · 2023-01-22T09:54:49.223Z · LW · GW

Now I see; yes, you are right. If you want the beliefs to be accurate at the civilisation level, that is the correct way of looking at it. This corresponds to the 1/3 conclusion in the Sleeping Beauty problem.

I was thinking of it at the universe level, where we are a way for the universe to understand itself. If we want the universe to form accurate beliefs about itself, then we should not include our own civilisation when counting the number of civilisations in the galaxy. However, when deciding whether we should be surprised that we don’t see other civilisations, you are right that we should include ourselves in the statistics.

Comment by Sune on [Cross-post] Is the Fermi Paradox due to the Flaw of Averages? · 2023-01-21T08:17:49.754Z · LW · GW

Yes, the ordering does matter. Compare two hypotheses: one, H1, says that on average there will be 1 civilisation in each galaxy. The other, H2, says that on average there will be far fewer than one civilisation per galaxy. Suppose the second hypothesis is true.

If you now do the experiment of choosing a random galaxy and counting the number of civilisations in that galaxy, you will probably not find any civilisation, which correctly supports H2.

If you do the second experiment of first finding yourself in some galaxy and then counting the number of civilisations in that galaxy, you will at least find your own civilisation, and probably not any others. If you don’t correct for the fact that this experiment is different, you will update strongly in the direction of H1, even when H2 is true and the evidence from the experiment is as favourable towards H2 as possible. This cannot be the correct reasoning, since correct reasoning should not consistently lead you to wrong conclusions.

You might argue that there is another possible observation which is even more favourable evidence towards H2: that you do not exist. However, in a universe with sufficiently many galaxies, the probability of this is negligible.
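A small simulation of the two experiments; the rates and galaxy count are illustrative only:

```python
import numpy as np

# Compare what a uniformly random galaxy looks like with what the galaxy
# you find yourself in looks like, under both hypotheses.
rng = np.random.default_rng(0)
n_galaxies = 1_000_000
for rate, label in [(1.0, "H1 (1 per galaxy)"), (1e-4, "H2 (far fewer)")]:
    civs = rng.poisson(rate, n_galaxies)            # civilisations per galaxy
    random_galaxy = civs[rng.integers(n_galaxies)]  # experiment 1
    # Experiment 2: sample a galaxy weighted by how many civilisations it
    # contains, i.e. condition on finding yourself in it.
    observer_galaxy = rng.choice(civs, p=civs / civs.sum())
    print(f"{label}: random galaxy has {random_galaxy}, "
          f"the galaxy you find yourself in has {observer_galaxy}")
```

Under H2 the random galaxy is almost always empty, while the galaxy you find yourself in contains exactly your own civilisation, so only the first experiment distinguishes the hypotheses.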