Posts

A very strange probability paradox 2024-11-22T14:01:36.587Z
What are some good ways to form opinions on controversial subjects in the current and upcoming era? 2024-10-27T14:33:53.960Z
Isomorphisms don't preserve subjective experience... right? 2024-07-03T14:22:59.679Z
notfnofn's Shortform 2024-06-11T12:07:21.911Z
Turning latexed notes into blog posts 2024-06-01T18:03:18.039Z
Quantized vs. continuous nature of qualia 2024-05-15T12:52:07.633Z
CDT vs. EDT on Deterrence 2024-02-24T15:41:03.757Z

Comments

Comment by notfnofn on Everywhere I Look, I See Kat Woods · 2025-01-17T21:24:00.823Z · LW · GW

Would bet on this sort of strategy working; hard agree that ends don't justify the means and see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But above examples are pretty tame.)

Comment by notfnofn on Numberwang: LLMs Doing Autonomous Research, and a Call for Input · 2025-01-16T20:53:21.969Z · LW · GW

I volunteer myself as a test subject; dm if interested

Comment by notfnofn on Everywhere I Look, I See Kat Woods · 2025-01-16T03:11:51.965Z · LW · GW

So I'm new here and this website is great because it doesn't have bite-sized oversimplifying propaganda. But isn't that common everywhere else? Those posts seem very typical for reddit and at least they're not outright misinformation.

Also I... don't hate these memes. They strike me as decent quality. Memes aren't supposed to make you think deeply about things.

Edit: searched Kat Woods here and now feel worse about those posts

Comment by notfnofn on How do you deal w/ Super Stimuli? · 2025-01-14T16:02:27.407Z · LW · GW

There have been a lot of tricks I've used over the years, some of which I'm still using now, but many of which require some level of discipline. One requires basically none, has a huge upside (to me), and has been trivial for me to maintain for years: a "newsfeed eradicator" extension. I've never had the temptation to turn it off unless it really messes with the functionality of a website. 

It basically turns off the "front page" of whatever website you apply it to (e.g. reddit/twitter/youtube/facebook) so that you don't see anything when you enter the site and have to actually search for whatever you're interested in. And for youtube, you never see suggestions to the right of or at the end of a video.

Comment by notfnofn on You are too dumb to understand insurance · 2025-01-10T17:41:19.105Z · LW · GW

I think even the scaling thing doesn't apply here because they're not insuring bigger trips: they're insuring more trips (which makes things strictly better). I'm having some trouble understanding Dennis' point.

Comment by notfnofn on You are too dumb to understand insurance · 2025-01-10T00:45:24.383Z · LW · GW

"I don't know, I recall something called the Kelly criterion which says you shouldn't scale your willingness to make risky bets proportionally with available capital - that is, you shouldn't be just as eager to bet your capital away when you have a lot as when you have very little, or you'll go into the red much faster.

I think I'm misunderstanding something here. Let's say you have $N$ dollars and are looking for the optimum number of dollars to bet on something that causes you to gain $b$ dollars for every dollar bet with probability $p$ and lose $a$ dollars for every dollar bet with probability $q = 1 - p$. The optimum number of dollars you should bet via the Kelly criterion seems to be

$$N \cdot \frac{bp - aq}{ab}$$

(assuming positive expectation; i.e. the numerator is positive), which does scale linearly with $N$. And this seems fundamental to this post.
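
To make the scaling explicit, here is a minimal numerical sketch (parameter values and variable names are mine) comparing the closed form above with a direct maximization of expected log wealth:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Bet x dollars out of a bankroll of N: win b*x with probability p,
# lose a*x with probability q = 1 - p. (Values here are arbitrary.)
N, a, b, p = 1000.0, 1.0, 1.0, 0.6
q = 1 - p

# Closed form from above: x* = N * (b*p - a*q) / (a*b)
x_closed = N * (b * p - a * q) / (a * b)

# Numerical check: maximize expected log wealth over the bet size x.
neg_log_wealth = lambda x: -(p * np.log(N + b * x) + q * np.log(N - a * x))
x_numeric = minimize_scalar(neg_log_wealth, bounds=(0, 0.999 * N / a),
                            method="bounded").x

print(x_closed, x_numeric)  # both ~200.0; doubling N doubles the optimal bet
```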

Comment by notfnofn on notfnofn's Shortform · 2025-01-03T13:47:41.393Z · LW · GW

(Epistemic status: low and interested in disagreements)

My economic expectations for the next ten years are something like:

  • Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.

  • Jobs become scarcer gradually. Humans remain at the helm for a while, but the willingness to replace one's workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it is difficult to falsify this. Regulations take a long time to cut, causing some jobs to remain far beyond their usefulness. Humans continue to get very offended if they find out they are talking to an AI in business matters.

  • Money remains a thing for the next decade and enough people have jobs to avoid a completely alien economy. There is time to slowly transition to UBI and distribution of prosperity, but there is no guarantee this occurs.

Comment by notfnofn on RESCHEDULED Lighthaven Sequences Reading Group #16 (Saturday 12/28) · 2025-01-01T22:54:56.538Z · LW · GW

Ah, darn. Are there any other events/meetups you know of at Lighthaven during those weeks?

Comment by notfnofn on RESCHEDULED Lighthaven Sequences Reading Group #16 (Saturday 12/28) · 2025-01-01T12:35:12.222Z · LW · GW

Is this going to continue in 2025? I'll be visiting Berkeley from Jan 5th to Jan 17th and would like to come visit.

Comment by notfnofn on shortplav · 2024-12-29T13:34:36.726Z · LW · GW

https://www.lesswrong.com/posts/SHq7wKA8iMqG5QfjC/notfnofn-s-shortform?commentId=JHjHJzE9wCLe2ANPG

Here's a little quick take of mine that provides a setting where centaur > AI (maybe). It's theory of computation, which is close to complexity theory.

Comment by notfnofn on No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate! · 2024-12-28T20:36:11.057Z · LW · GW

That's incredible.

But how do they profit? They say they don't profit on Middle Eastern war markets, so they must be profiting elsewhere somehow.

Comment by notfnofn on No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate! · 2024-12-28T17:17:51.286Z · LW · GW

There are also gas fees, which amplify this effect, but this is a very important point. A prediction market price gives rise to a function from interest rates to ranges of probabilities such that a rational investor whose probability lies in the range would not bet on the market at all. The larger the interest rate or the farther out the market, the bigger the range.

Probably an easy widget to make: something that takes as input the Polymarket price, gas fees, and interest rate, and spits out this range of probabilities.
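
A minimal sketch of that widget, under my own simplifying assumptions (shares pay $1 at resolution, a flat per-share fee on either side, and the alternative to betting is compounding cash at the risk-free rate):

```python
def no_trade_range(price, fee, rate, years):
    """Return (lo, hi): if your subjective probability is in this range,
    neither YES nor NO beats just holding cash until resolution.

    Assumptions (mine): YES costs price + fee and pays $1 on the event;
    NO costs (1 - price) + fee and pays $1 otherwise; cash grows by
    (1 + rate) ** years over the life of the market.
    """
    growth = (1 + rate) ** years
    hi = min(1.0, (price + fee) * growth)            # buy YES only if p > hi
    lo = max(0.0, 1 - ((1 - price) + fee) * growth)  # buy NO only if p < lo
    return lo, hi

# A 10-cent contract with 1-cent fees, 5% interest, one year to resolution:
print(no_trade_range(0.10, 0.01, 0.05, 1.0))  # ~(0.045, 0.116)
```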

Comment by notfnofn on AlphaAndOmega's Shortform · 2024-12-28T15:07:05.809Z · LW · GW

The corank has to be more than 1, not equal to 1. I'm not sure if such a matrix exists; the reason I was able to change its mind by supplying a corank-1 matrix was that its kernel behaved in a way that significantly violated its intuition.

Comment by notfnofn on AlphaAndOmega's Shortform · 2024-12-28T13:25:59.271Z · LW · GW

I similarly felt in the past that by the time computers were Pareto-better than me at math, there would already be mass layoffs. I no longer believe this to be the case at all, and have been thinking about how I should orient myself going forward. I was very fortunate to land an offer for an applied-math research job starting in the next few months, but my plan is to devote a lot more energy to networking + building people skills while I'm there instead of just hyperfocusing on learning the relevant fields.

o1 (standard, not pro) is still not the best at math reasoning, though. I occasionally give it linear algebra lemmas that I suspect it might be able to help with, but it always makes major errors. Here are some examples:

  • I have a finite-dimensional real vector space $V$ equipped with a symmetric bilinear form $B$ which is not necessarily non-degenerate. Let $n$ be the dimension of $V$, $R$ be the subspace of $v \in V$ with $B(v, \cdot) \equiv 0$, and $k$ be the dimension of $R$. Let $W_1$ and $W_2$ be $(n + k)$-dimensional real vector spaces that contain $V$ and are equipped with symmetric non-degenerate bilinear forms that extend $B$. Show that there exists an isometry from $W_1$ to $W_2$ that restricts to the identity on $V$. To its credit, it gave me some references that helped me prove this, but its argument was completely bogus.

  • Let $V$ be a real finite-dimensional vector space equipped with a symmetric non-degenerate bilinear form $B$ and let $\sigma$ be an isometry of $V$. Prove or disprove that the restriction of $B$ to the fixed-point subspace of $\sigma$ on $V$ is non-degenerate. (Here it sort of had the right idea, but its counterexamples were never right.)

  • Does there exist a symmetric irreducible square matrix with diagonal entries equal to $2$ and non-positive integer off-diagonal entries such that the corank is more than $1$? Here it gave a completely wrong proof of "no" and kept gaslighting me into believing that the general idea must work, and that it's a standard result in the field following from a book that I happened to have actually read. It kept insisting this, no matter how many times I corrected its errors, until I presented it with an example of a corank-1 matrix that made it clear that its idea was unfixable. (A quick numerical corank check is sketched below.)
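
For reference, this kind of corank check is nearly a one-liner with numpy; a sketch on an illustrative matrix of the required shape (the affine A2 Cartan matrix, not the matrix from my chat):

```python
import numpy as np

# An illustrative matrix of the required shape: symmetric, irreducible,
# diagonal entries 2, non-positive integer off-diagonal entries. Every
# row sums to 0, so the all-ones vector spans the kernel and the corank
# is exactly 1.
M = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]])

corank = M.shape[0] - np.linalg.matrix_rank(M)
print(corank)  # 1
```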

I have a strong suspicion that o3 will be much better than o1 though.

Comment by notfnofn on Letter from an Alien Mind · 2024-12-27T17:48:06.923Z · LW · GW

My decision to avoid satellite view is a relic from a time of conserving data (and even then it might have been a case of using salt to accelerate cooking time). I wonder if there's a risk to using it in places where cellular data is spotty, though. I'd imagine that using satellite view would reduce the efficiency with which the application saves local map information, which might be important if I make a wrong turn where there's no data available.

Comment by notfnofn on When Is Insurance Worth It? · 2024-12-24T21:08:49.495Z · LW · GW

From the original post:

The purpose of insurance is not to help us pay for things that we literally do not have enough money to pay for. It does help in that situation, but the purpose of insurance is much broader than that. What insurance does is help us avoid large drawdowns on our accumulated wealth, in order for our wealth to gather compound interest faster.

Think about that. Even though insurance is an expected loss, it helps us earn more money in the long run. This comes back to the Kelly criterion, which teaches us that the compounding effects on wealth can make it worth paying a little up front to avoid a potential large loss later.

Click the link for a more in-depth explanation
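
To make the Kelly point concrete, here is a small illustration with made-up numbers (mine, not from the post): a policy can be negative-EV in dollars and still raise expected log wealth.

```python
import math

# Made-up numbers: $100k wealth, 1% chance of a $50k loss, $600 premium.
# The premium exceeds the expected loss ($500), so insuring loses money
# on average, yet it wins in expected log wealth.
W, p, loss, premium = 100_000, 0.01, 50_000, 600

log_uninsured = (1 - p) * math.log(W) + p * math.log(W - loss)
log_insured = math.log(W - premium)

print(log_insured > log_uninsured)  # True: the insured path compounds faster
```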

Comment by notfnofn on johnswentworth's Shortform · 2024-12-24T16:55:25.686Z · LW · GW

Comment by notfnofn on Why is neuron count of human brain relevant to AI timelines? · 2024-12-24T12:04:20.075Z · LW · GW

If you are making an argument about how much compute can find an intelligent mind, you have to look at how much compute was used by all of evolution.

Just to make sure I fully understand your argument, is this paraphrase correct?


"Suppose we have the compute theoretically required to simulate the human brain down to an adequate granularity for obtaining its intelligence (which might be at the level of cells instead of, say, the atomic level). Even so, one has to consider the compute required to actually build such a simulation, which could be much larger as the human brain was built by the full universe."


(My personal view is that the opposite is true: recent evidence suggests we can Pareto-exceed human intelligence while being very far from the compute required to simulate a brain. An idea I've seen floating around here is that natural selection built our brain randomly, with a reward function that valued producing offspring, so there is a lot of architecture that is irrelevant to intelligence.)

Comment by notfnofn on LessWrong's (first) album: I Have Been A Good Bing · 2024-12-22T23:19:57.218Z · LW · GW

Spotify first recommended her to me in September 2023, and later that September I came across r/slatestarcodex, which was my first exposure to the rationalist community. That's kind of funny.

Comment by notfnofn on LessWrong's (first) album: I Have Been A Good Bing · 2024-12-22T21:46:42.966Z · LW · GW

Huh. Vienna Teng was my top artist, too, and this is the only other Spotify Wrapped I've seen here. Is she popular in these circles?

Comment by notfnofn on Kaj's shortform feed · 2024-12-22T18:40:41.981Z · LW · GW

Even a year ago, I would have bet extremely high odds that data analyst-type jobs would be replaced well before postdocs in math and theoretical physics. It's wild that the reverse is plausible now

Comment by notfnofn on When Is Insurance Worth It? · 2024-12-19T19:44:19.494Z · LW · GW

Annoying anecdote: I interviewed for an entry-level actuarial position recently and, when asked about the purpose of insurance, I responded with essentially the above argument (along the lines of increasing everyone's expected log wealth, with Kelly betting as motivation). The reply I got was "that's overcomplicated; the purpose of insurance is to let people avoid risk".

By the way, I agree strongly with this post and have been trying to make my insurance decisions based on this philosophy over the past year.

Comment by notfnofn on What conclusions can be drawn from a single observation about wealth in tennis? · 2024-12-18T11:41:08.673Z · LW · GW

Some ideas discussed here + in comments

https://www.astralcodexten.com/p/secrets-of-the-great-families

Comment by notfnofn on notfnofn's Shortform · 2024-12-18T01:49:56.557Z · LW · GW

Oops, yeah the written programs are supposed to be deterministic. The point of mentioning the RNG was to handle the fact that an AI might derive its performance from a strong random number generator, which a C code can't emulate.

Comment by notfnofn on notfnofn's Shortform · 2024-12-17T14:12:46.905Z · LW · GW

To clarify: we are not running any programs, just providing code. In a sense, we are competing at the task of providing descriptions for very large numbers with an upper bound on the size of the description (and the requirement that the description is computable).

Comment by notfnofn on Nathan Young's Shortform · 2024-12-16T17:45:04.778Z · LW · GW

I personally used beeminder for this (which I think originated from this community)

Comment by notfnofn on notfnofn's Shortform · 2024-12-16T14:02:57.064Z · LW · GW

Little thought experiment with flavors of Newcomb and Berry's Paradox:

I have the code of an ASI in front of me, translated into C along with an oracle to a high-quality RNG. This code is N characters. I want to compete with this ASI at the task of writing a 2N-character (edit: deterministic) C code that halts and prints a very large integer. Will I always win?

Sketch of why: I can write my C code to simulate the action of the ASI on a prompt like "write a 2N-character C code that halts and prints the largest integer" using every combination of possible RNG calls and print the max + 1 or something.

Sketch of why not: The ASI can make us both lose by "intending" to print a non-halting program if it is asked to. There might be probabilistic approaches for the ASI as well, where it produces a non-halting program with some chance. If I can detect this in the simulations, I might be able to work around this and still beat the ASI.

Comment by notfnofn on [deleted post] 2024-12-13T18:59:39.912Z

Quick note: it might be easier to write your utility function as $-e^{-x/\lambda}$ for some parameter $\lambda > 0$ (which is equivalent to the one you have, after rescaling and shifting). Utility functions should be concave, but this one is very concave, being bounded above.

Utility functions are discussed a lot here; I think it's worth poking around a bit.

Comment by notfnofn on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-12T12:44:48.970Z · LW · GW

I just read through the sequence. Eliezer is a fantastic writer and surprisingly well-versed in many areas, but he generally writes to convince a broad audience of his perspective. I personally prefer writing that gets into the technical weeds and focuses on convincing the reader of the plausibility of their perspective, instead of the absolute truth of it (which is why I listed Scott Aaronson's paper first; I've read many of his other papers and blogs, including on the topic of free will, and really enjoy them).

Comment by notfnofn on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-11T18:36:21.429Z · LW · GW

I'm going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle's wall, Johnson's popcorn, and Putnam's rock), and when that's eventually done I might report back here, or make a new post if this thread is long dead by then.

Comment by notfnofn on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-11T14:52:25.442Z · LW · GW

Okay, let me know if this is a fair assessment:

  1. Let's consider someone meditating in a dark and mostly-sealed room with minimal sensory inputs, and they're meditating in a way that we can agree they're having a conscious experience. Let's pick a 1 second window and consider the CNS and local environment of the meditator during that window.

  2. (I don't know much physics, so this might need adjustment.) Let's say we had a reasonable guess of an "initial wavefunction" of the meditator in that window. Maybe this hypothetical is unreasonable in a deep way and deserves to be fought. But supposing it can be done, and we had a sufficiently powerful supercomputer, we could encode and simulate possible trajectories of this CNS over a one-second window. CF (computational functionalism) suggests that there is a genuine conscious experience there.

  3. Now let's look at how one such simulation is encoded, which we could view as a long string of 0s and 1s. The tricky part (I think) is as follows: we have a way of understanding these 0s and 1s as particles and the process of interpreting these as states of particles is "simple". But I can't convert that understanding rigorously into the length of a program because all programs can do is convert one encoding into another (and presumably we've designed this encoding to be as straightforward-to-interpret as possible, instead of as short as possible).

  4. Let's say I have sand swirling around in a sandstorm. I likewise pick a section of this, and do something like the above to encode it as a sequence of integers in a manner that is as easy for a human to interpret as possible, and makes no effort to be compressed.

  5. Now I can ask for the K-complexity of the CNS string, given the sand-swirling sequence as input (i.e. the size of the smallest Turing machine that prints the CNS string with the sand-swirling sequence on its input tape). Divide this by the K-complexity of the CNS string. If the resulting fraction is close to zero, maybe there's a sense in which the sand-swirling sequence is really emulating the meditator's conscious experience. But this ratio is probably closer to 1. (By the way, the choice of using K-complexity is itself suspect, but it can be swapped with other notions of complexity.)

What I can't seem to shake is that it seems to be fundamentally important that we have some notion of 0s and 1s encoding things in a manner that is optimally "human-friendly". I don't know how this can be replaced with a way that avoids needing a sentient being.
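
As an aside, K-complexity is uncomputable, but the ratio in step 5 can be crudely approximated with an off-the-shelf compressor (the same trick behind normalized compression distance). A toy sketch, with random bytes standing in for the two encoded sequences:

```python
import os
import zlib

def C(x: bytes) -> int:
    """Compressed length: a computable (and crude) stand-in for K-complexity."""
    return len(zlib.compress(x, 9))

def conditional_ratio(target: bytes, given: bytes) -> float:
    """Approximate K(target | given) / K(target): near 0 when `given`
    contains almost everything needed to reproduce `target`, near 1
    when it provides no help at all."""
    return (C(given + target) - C(given)) / C(target)

cns = os.urandom(10_000)   # stand-in for the encoded CNS simulation
sand = os.urandom(10_000)  # stand-in for the encoded sandstorm sequence

print(conditional_ratio(cns, cns))   # ~0: the input trivially yields the target
print(conditional_ratio(cns, sand))  # ~1: unrelated noise doesn't help
```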

Comment by notfnofn on My thoughts on correlation and causation · 2024-12-11T13:53:07.740Z · LW · GW

Based on your previous posts (and other posts like this), I suspect this might not get any comments explaining the downvotes. So I'll explain the reason for my downvote, which you may find helpful:

I don't see any ideas. You start with a really weird, hard-to-read, and I think wrong definition of a Cartesian product, but then never mention cartesian products again. You then don't define a relation, but I'm guessing that you meant a relation to be a subset of V x V. But then your definition of dependency doesn't make sense. We usually discuss dependency over things called "random variables" (which are not actual variables in the sense that you're using them), and it's hard to find a charitable interpretation of what you could possibly mean in a way that makes sense.

The next section is a bunch of vague ramblings that make no effort to be coherent. How does a relation express a law of physics? What are the variables? How the heck are you getting boundary conditions into a relation? These are not rhetorical questions: I was trying to find a charitable interpretation to make any of these concepts make sense but I couldn't.

For future posts I think you should:

  1. Take the time to properly understand the concepts you want to talk about. I don't think you know the formal definition of what it means for a random variable X to depend on a random variable Y, and I suspect you might not even know what a random variable is.

  2. Properly flesh out your ideas instead of bringing up a bunch of vague concepts and hoping the reader can flesh them out for you. Definitely don't publish anything that has things like "what are consequences???" in there: if you're creating a theory you should obviously be the one to define everything.

  3. Make sure each line actually follows from the line before it. You can use an AI to help you here: give it your draft and ask if every line convincingly follows from the prior assumptions.

Comment by notfnofn on Stupid Question: Why am I getting consistently downvoted? · 2024-12-10T12:54:45.943Z · LW · GW

I've had reddit redirect here for almost a year now (with some slip-ups here and there). It's been fantastic for my mental health.

Comment by notfnofn on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T18:07:31.188Z · LW · GW

Epistemic status: very new to philosophy and theory of mind, but has taken a couple graduate courses in subjects related to the theory of computation.

I think there are two separate matters:

  1. I have a physical object that has a means to receive inputs and will do something based on those inputs. Suppose I now create two machines: one that takes 0s and 1s and converts them into something the object receives, and one that observes the actions of the physical object and then spits out an output. Both of these machines operate in time that is simultaneously at most quadratic in the length of the input AND at most linear in the "run time" of the physical object. And both of these machines are "bijective".

If I create a program that has the same input/outputs as the above configuration (which is highly non-unique, and can vary significantly based on the choice of machines), there is some sense in which the physical object "computes" this program. This is kind of weak since the input/output converting machines can do a lot to emulate different programs, but at least you're getting things in a similar complexity class.

  2. You have a central nervous system (CNS) which is currently having a "subjective experience", whatever that means. It is true that your CNS can be viewed as the aforementioned physical object. And while it is also true that, in the previous framework, one would need a very long and complicated program, it also seems to be true that my subjective experience arises from just a specific sequence of inputs.

If we were to only consider how the physical object behaves with a few specific inputs, I think it's difficult to eliminate any possibilities for what the object is computing. When I see thought experiments like Putnam's rock, they make sense to me because we're only looking at a specific computation, not a full input-output set.

Edit: @Davidmanheim I've read your reply and agree that I've slightly misinterpreted your post. I'll think about if the above ideas can be salvaged from the angle of measuring information in a long but finite sequence (e.g. Kolmogorov complexity) and reply when I have time.

Comment by notfnofn on Algebraic Linguistics · 2024-12-08T15:17:25.813Z · LW · GW

In general, it feels like the alphabet can be partitioned into "sections" where you can use other letters in the same section for additional variables that will play similar roles. Something like:

[a,b,c,d]; [f,g,h]; [i,j,k]; [m,n]; [p,q]; [r,s,t]; [u,v,w]; [x,y,z]

Sometimes these can be combined: [m,n,p,q]; [p,q,r,s,t]; [r,s,t,u,v,w]; [u,v,w,x,y,z]

Comment by notfnofn on Open Thread Fall 2024 · 2024-12-05T18:43:12.974Z · LW · GW

Is there a way for me to prove that I'm a human on this website before technology makes this task even more difficult?

Comment by notfnofn on Launching Applications for the Global AI Safety Fellowship 2025! · 2024-12-03T20:57:22.979Z · LW · GW

Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.

Comment by notfnofn on Why does ChatGPT throw an error when outputting "David Mayer"? · 2024-12-01T14:24:46.306Z · LW · GW

I don't think this explanation makes sense. I asked ChatGPT "Can you tell me things about Akhmed Chatayev", and it had no problem using his actual name over and over. I asked about his aliases and it said

Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was "Akhmed Shishani," with "Shishani" translating to "Chechen," indicating his ethnic origin. Wikipedia

Additionally, Chatayev adopted the alias "David

Then threw an error message. Edit: upon refresh it said more:

Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was "Akhmed Shishani," with "Shishani" translating to "Chechen," indicating his ethnic origin. Wikipedia

Additionally, Chatayev adopted the alias "David Mayer." This particular alias led to a notable case of mistaken identity involving a 90-year-old U.S. Army veteran and theater historian named David Mayer. The veteran experienced significant disruptions, such as difficulties in traveling and receiving mail, due to his name being on a U.S. security list associated with Chatayev's alias. CBC

These aliases facilitated Chatayev

(I didn't stop copying there; that was the end of the answer. Full chat)

Comment by notfnofn on A Meritocracy of Taste · 2024-12-01T14:16:42.034Z · LW · GW

I think their metric might be clicks and not upvotes (or at least, clicking has a heavy weight). Are you more likely to click on a video that pushes an argument you oppose?

As a quick test, you can launch a VPN and open private browsing to see how your recommendations change after a few videos.

Comment by notfnofn on Launching Applications for the Global AI Safety Fellowship 2025! · 2024-11-30T17:38:56.717Z · LW · GW

I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?

Comment by notfnofn on A very strange probability paradox · 2024-11-28T14:45:07.035Z · LW · GW

Jumping in here: the whole point of the paragraph right after defining "A" and "B" was to ensure we were all on the same page. I also don't understand what you mean by:

Most ordinary people will assume it means that all the rolls were even

and much else of what you've written. I tell you I will roll a die until I get two 6s and let you know how many odds I rolled in the process. I then do so secretly and tell you there were 0 odds. All rolls are even. You can now make a probability distribution on the number of rolls I made, and compute its expectation.
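
For anyone who wants to check the conditional expectation empirically, here is a rejection-sampling sketch of exactly this procedure (roll until two 6s, discard any run containing an odd roll):

```python
import random

def trial():
    """Roll a fair die until two 6s have appeared (not necessarily in a
    row); return the number of rolls, or None if any roll was odd."""
    rolls, sixes = 0, 0
    while sixes < 2:
        r = random.randint(1, 6)
        if r % 2 == 1:
            return None  # condition violated: discard this run
        rolls += 1
        sixes += r == 6
    return rolls

samples = [n for n in (trial() for _ in range(1_000_000)) if n is not None]
print(sum(samples) / len(samples))  # ~3, versus 12 without the conditioning
```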

Comment by notfnofn on Two flavors of computational functionalism · 2024-11-26T17:36:20.145Z · LW · GW

I recently came across unsupervised machine translation here. It's not directly applicable, but it opens the possibility that, given enough information about "something", you can pin down what it's encoding in your own language.

So let's say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I'm having some trouble formulating this precisely.

Comment by notfnofn on Open Thread Fall 2024 · 2024-11-25T18:09:51.095Z · LW · GW

Possible bug report: today I've been seeing errors of the form

Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?

that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.

Comment by notfnofn on A very strange probability paradox · 2024-11-24T11:19:24.400Z · LW · GW

There is an important nuance that makes it ~(n + 4/5) for large n (instead of n + 1), but I'd have to think a bit to remember what it was and give a nice little explanation. If you can decipher this comment thread, it's somewhat explained there: https://old.reddit.com/r/mathriddles/comments/17kuong/you_roll_a_die_until_you_get_n_1s_in_a_row/k7edj6l/

Comment by notfnofn on A very strange probability paradox · 2024-11-24T10:02:50.167Z · LW · GW

You have very strong intuition. A puzzle I was giving people before was "Round E[number of rolls until 100 6s in a row | all even] to the next integer", and the proof I had in mind for 101 was very close to your second paragraph. And when a friend of mine missed the "in a row" part and got 150, the resolution we came to (after many hours!) was similar to the rest of the argument you gave.

Comment by notfnofn on A very strange probability paradox · 2024-11-22T14:01:59.014Z · LW · GW

By the way, this is my first non-question post on lesswrong after lurking for almost a year. I've made some guesses about the mathematical maturity I can assume, but I'd appreciate feedback on if I should assume more or less in future posts (and general feedback about the writing style, either here or in private).

Comment by notfnofn on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-20T20:18:37.361Z · LW · GW

You have the tools necessary to figure this out

Comment by notfnofn on [deleted post] 2024-11-20T02:02:24.447Z

I think it would be a good idea to just start applying now to get a feel for how hirable you specifically are, and for what you specifically could be doing now to bounce back very quickly if you get let go. You might be more valuable than you think.

Comment by notfnofn on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T02:27:01.553Z · LW · GW

$(1 - 1/n)^n \approx 1/e$ has come up from time to time for me

Comment by notfnofn on notfnofn's Shortform · 2024-11-17T19:48:45.926Z · LW · GW

The source seems genuine (https://old.reddit.com/r/artificial/comments/1gq4acr/gemini_told_my_brother_to_die_threatening/lwv84fr/?context=3), but I'm less sure now.