Posts

Learning Written Hindi From Scratch 2024-04-11T11:13:17.743Z
David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud. 2024-01-27T13:21:05.068Z
Wobbly Table Theorem in Practice 2023-09-28T14:33:16.898Z
“Thinking Physics” as an applied rationality exercise 2023-08-27T15:31:00.814Z
“Thinking Physics” as an applied rationality exercise 2023-08-10T08:32:01.075Z
Karlsruhe Rationality Meetup: Inadequate Equilibria pt2 2022-11-10T10:00:44.150Z
Karlsruhe Rationality Meetup: Inadequate Equilibria pt1 2022-11-02T09:15:57.489Z
What Is the Idea Behind (Un-)Supervised Learning and Reinforcement Learning? 2022-09-30T16:48:06.523Z
Does the existence of shared human values imply alignment is "easy"? 2022-09-26T18:01:10.661Z
Karlsruhe Rationality Meetup: Predictions 2022-09-06T16:56:57.021Z
Moneypumping Bryan Caplan's Belief in Free Will 2022-07-16T00:46:03.176Z
Returns on cognition of different board games 2022-02-13T20:40:49.163Z
Coping with Undecidability 2022-01-27T10:31:00.520Z
Time until graduation as a proxy for picking between (German) universities 2022-01-24T18:27:32.984Z
Are "non-computable functions" always hard to solve in practice? 2021-12-20T16:32:25.118Z
What is the evidence on the Church-Turing Thesis? 2021-09-19T11:34:49.377Z
Chance that "AI safety basically [doesn't need] to be solved, we’ll just solve it by default unless we’re completely completely careless" 2020-12-08T21:08:47.575Z
Morpheus's Shortform 2020-08-07T22:35:57.530Z

Comments

Comment by Morpheus on Job Search Advice · 2024-04-23T10:43:24.811Z · LW · GW

A piece of advice I frequently hear: always make sure you call somebody in the company you're applying for.

Is this still up-to-date advice? Or is messaging someone over LinkedIn or similar more appropriate? Mostly asking because I have the impression that the internet has shifted norms to the point where no one does phone calls anymore.

Comment by Morpheus on A couple productivity tips for overthinkers · 2024-04-23T08:41:03.397Z · LW · GW
  1. If you find that you’re reluctant to delete computer files / emails, don’t empty the trash

In Gmail I like to scan the email headers and then bulk-select and archive them (* a then e, thanks to vim shortcuts). After 5 years of doing this, I still haven't run out of Gmail's free storage. I already let Gmail sort the emails into "Primary", "Promotions", "Updates", etc. Usually the only important things are in "Primary", plus 1 or 2 in "Updates".

Comment by Morpheus on Morpheus's Shortform · 2024-04-20T12:03:19.745Z · LW · GW

Can anyone here recommend particular tools to practice grammar? Or does anyone have strong opinions on the best workflow/tool to correct grammar on the fly? I already know Grammarly and LanguageTool, but Grammarly seems steep at $30 per month when I don't know if it is any good. I have tried GPT-4 before, but the main problems there are that it is too slow and that it changes my sentences more than I would like (I tried to make it do that less through prompting, which did not help much).

I notice that feeling unconfident about my grammar/punctuation leads me to write less online; applying for jobs or fellowships, especially, feels more icky because of it. That seems like an avoidable failure mode.

Ideally, I would like something like the German Orthografietrainer (it was created to teach middle and high school children spelling and grammar). It teaches you, on a sentence-by-sentence basis, where to put the commas and why, by explaining the sentence structure (illustrated through additional examples). Because it trains you with particularly tricky sentences, the training is effective; within ~3 hours I rapidly became better at punctuation than my parents. Is there a similar tool for English that I have never heard of?

While writing this, I noticed that I no longer had the free version of Grammarly enabled, so I tried it on this very text. One trick I noticed: it lists what kinds of errors you are making across the whole text, so it is easy to infer which particular mistake I made in which spot and then correct it myself. Also, Grammarly did not catch a few simple spelling and punctuation mistakes (like "anymore", or the comma at the start of this sentence). At the end, I also tried ProWritingAid, which found additional issues.

Comment by Morpheus on Is LLM Translation Without Rosetta Stone possible? · 2024-04-11T09:27:36.672Z · LW · GW

Trying to learn a language from scratch, just from text, is a fun exercise for humans, too. I recently tried this with Hindi after I had a disagreement with someone about the exact question of this post. I didn't get very far in 2 hours, though.

Comment by Morpheus on Quinn's Shortform · 2024-04-06T20:50:58.135Z · LW · GW

Tridactyl is amazing. You can disable its mode on specific websites by running the blacklistadd command; once configured, these settings can also be saved in your config file. Here's my config (though be careful before copying it: it has fixamo_quiet enabled, a command that almost got Tridactyl removed when it was enabled by default. You should read what it does before you enable it.)

Here are my ignore settings:

autocmd DocStart https://youtube.com mode ignore
autocmd DocStart https://todoist.com mode ignore
autocmd DocStart mail.google.com mode ignore
autocmd DocStart calendar.google.com mode ignore
autocmd DocStart keyma.sh mode ignore
autocmd DocStart monkeytype.com mode ignore
autocmd DocStart https://www.youtube.com mode ignore
autocmd DocStart https://ilias.studium.kit.edu/ mode ignore
autocmd DocStart localhost:888 mode ignore
autocmd DocStart getguestimate.com mode ignore
autocmd DocStart localhost:8888 mode ignore
Comment by Morpheus on The Best Tacit Knowledge Videos on Every Subject · 2024-03-31T18:50:20.864Z · LW · GW

Juggling: Anthony Gatto's juggling routine from 2000. Anthony Gatto holds several juggling world records. This routine is infamous in the juggling world (here's a decent juggler commenting on it), as is the fact that he gave up juggling to work with concrete instead (because it pays the bills). Here's more context on Gatto and his routine (the guy picking up the balls for him in the video is his father, for example):

Comment by Morpheus on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-23T21:05:18.419Z · LW · GW

Agreed. The "electoral college is good actually" part especially is where I started laughing. If you don't want tyranny by the majority, perhaps not crippling your system with first-past-the-post voting would be a first step toward a more sane system.

Comment by Morpheus on On green · 2024-03-23T20:15:20.541Z · LW · GW

Absolutely love this essay! The view of green from the perspective of non-green thoughts really resonated with things I have thought in the past, and made me notice how I have been confused by green. Helpful for AGI or not, this is giving me a bunch of fresh thoughts about problems/confusing areas in my own life, so thanks!

Comment by Morpheus on Natural Latents: The Concepts · 2024-03-21T22:44:36.185Z · LW · GW

A quick intuitive check for whether something is a natural latent over some parts of a system consists of two questions:

  • Are the parts (approximately) independent given the candidate natural latent?

I first had some trouble checking this condition intuitively, and I might still not have gotten it right. One of the main things that initially confused me is that if I want to reason about natural latents for "a" dog, I need to think about a group of dogs, even though there are also natural latents for the individual dog (like fur color being a natural latent across the dog's fur). Say I check the independence condition for a set of animals sorted into two clusters, cats and dogs. If I look at a single animal's shoulder height in those sorted clusters, it tells me which of the two clusters it's in; but once I have updated on that information, my guesses for the other dogs' heights will not improve further.
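
To double-check my understanding, here is a toy simulation of that independence check (my own sketch; the species and numbers are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Candidate natural latent: which cluster each pair of animals comes from.
is_dog = rng.random(n) < 0.5
cluster_mean = np.where(is_dog, 60.0, 25.0)  # shoulder height in cm (made-up numbers)

# Two animals drawn from the same cluster, with independent noise around the cluster mean.
h1 = cluster_mean + rng.normal(0, 5, n)
h2 = cluster_mean + rng.normal(0, 5, n)

# Unconditionally, the two heights are strongly correlated (one height reveals the cluster)...
print(np.corrcoef(h1, h2)[0, 1])  # ~0.9
# ...but conditional on the candidate latent, they are approximately independent.
print(np.corrcoef(h1[is_dog], h2[is_dog])[0, 1])    # ~0
print(np.corrcoef(h1[~is_dog], h2[~is_dog])[0, 1])  # ~0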

An important example of something that is not a natural latent is the empirical mean of a fat-tailed distribution at real-world sample sizes (while it is one for thin-tailed distributions). This fact is what Nassim Taleb keeps harping on. It doesn't mean fat-tailed distributions have no natural latents: for Pareto distributions (think: pandemics, earthquakes, wealth), one still has natural latents like the tail index (estimated from plotting the data on a log-log plot by dilettantes like me, and more sophisticatedly by real professionals).
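
For concreteness, here is a sketch of the dilettante version (my own illustration, not from the post): recover the tail index from the slope of the empirical CCDF on log-log axes.

import numpy as np

rng = np.random.default_rng(0)
alpha_true = 1.5  # tail index we will try to recover
samples = rng.pareto(alpha_true, 10_000) + 1.0  # standard Pareto samples (x_min = 1)

# For a Pareto tail, P(X > x) = x^(-alpha), so log P(X > x) = -alpha * log(x):
# the CCDF on a log-log plot is a line with slope -alpha.
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

# Fit the slope on the tail, dropping the last points where the CCDF hits 0.
tail = slice(len(x) // 2, -10)
slope, _ = np.polyfit(np.log(x[tail]), np.log(ccdf[tail]), 1)
print(f"estimated tail index: {-slope:.2f}")  # ~1.5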

Comment by Morpheus on Morpheus's Shortform · 2024-03-21T14:27:36.564Z · LW · GW

If I had more time I would have written a shorter letter.

TLDR: I looked into how much it would take to fine-tune gpt-4 to do Fermi estimates better. If you liked the post/paper on fine-tuning language models to make predictions, you might like reading this. I evaluated gpt-4 on the first dataset I found, but gpt-4 was already making better Fermi estimates than the examples in the dataset, so I stopped there (my code).

First problem I encountered: there is no public access to fine-tuning gpt-4 so far. Ok, we might as well just do gpt-3.5, I guess.

First, I found this Fermi estimate dataset. (While doing this, I was thinking I should perhaps search more widely for what kinds of AI benchmarks exist, since a dataset evaluating a similar capability is probably already out there; I just don't know its name.)

Next, I looked at this paper, where people used, among others, gpt-3.5 and gpt-4 on this benchmark. Clearly these people weren't even trying, though, because gpt-4 does worse than gpt-3.5. One of the main issues I saw was that they tried to make the LLM output the answer as a program in the domain-specific language used in that dataset; they couldn't even get the LLM to output valid programs more than 60% of the time. (Their metric compares on a log scale: a score of 1 is best, falling to 0 when the LLM's answer is 3 or more orders of magnitude from the real answer: fp-score(x) = max(0, 1 − (1/3)·|log10(prediction/answer)|).)
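
For concreteness, here is their metric in Python (my own transcription of the formula above):

import math

def fp_score(prediction: float, answer: float) -> float:
    # 1 for an exact match, decreasing linearly in order-of-magnitude error,
    # 0 once the estimate is 3 or more orders of magnitude off.
    return max(0.0, 1.0 - abs(math.log10(prediction / answer)) / 3.0)

print(fp_score(1e6, 1e6))  # 1.0
print(fp_score(1e7, 1e6))  # ~0.67 (one order of magnitude off)
print(fp_score(1e9, 1e6))  # 0.0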


My conjecture was that just using Python instead should give better results (this turned out to be true). I get a mean score of ~0.57 on 100 sample problems with gpt-4-turbo, results as good as the paper gets when it first provides "context" by giving the LLM the values of the key variables needed to compute the answer (why would the task even still be hard at that point?).

When gpt-4 turned out to get a worse fp-score than gpt-4-turbo on my 10 samples, I got suspicious. After looking at the samples where gpt-4 got a bad score, it was clear this was mostly to blame on the bad quality of the dataset. Two reference answers were flat-out confused or not using the correct variables, while gpt-4 was answering correctly. Once, the question didn't make clear what unit to use. In 2 of the samples, gpt-4 gave a better answer: once by using a better approach (using geometry, rather than wrong figures for how much energy the earth gets from the sun, to determine the fraction of the sun's energy that the earth receives), and once by having better input estimates (like how many car miles are driven in total in the US).

So on this dataset, gpt-4 already seems to be at the point of data saturation. I was actually quite impressed by how well it was doing. When I had tried using gpt-4 for this task before, I had always felt like it was doing quite badly. One guess I have is that when I ask gpt-4 for an estimate, it is often a practical question, which is actually harder than these artificial ones. In addition, the reason I ask gpt-4 in the first place is that the question is hard, and I expect to need a lot of cognitive labor to do it myself.

Another data point in this respect was the "Thinking Physics" exercises, which I tried with some of my friends. On that task, gpt-4 was better than the people who were bad at it, but worse than the people who were good at it and given 5–10 minutes of thinking time (although I did not rigorously evaluate that). GPT-4 is probably better than most humans at doing Fermi estimates given 10 minutes of time, especially in domains one is unfamiliar with, since it has so much more breadth.

I would be interested to see what one would get out of actually making a high-quality dataset by taking Fermi estimates from people I deem to produce high-quality work in this area.

Comment by Morpheus on D0TheMath's Shortform · 2024-03-20T11:49:08.074Z · LW · GW

Not exactly what you were looking for, but I recently noticed a bunch of John Wentworth's posts from the past 6 years that I had been missing out on. So if you get a lot out of them too, I recommend just sorting by 'old'. I really liked don't get distracted by the boilerplate (the first example made something click about math for me that hadn't clicked before, which would have helped me engage with some "boilerplate" in a more productive way). I also liked constraints and slackness, but I haven't gone beyond the first exercise yet. There are also more technical posts that I haven't had time to dig into yet.

bhauth doesn't have as long a track record, but I got some interesting ideas from his blog that aren't on his LessWrong account. I really liked proposed future economies and the legibility bottleneck.

Comment by Morpheus on Useful Vices for Wicked Problems · 2024-03-07T04:49:05.543Z · LW · GW

This post warms my heart. Thank you.

Comment by Morpheus on Alex_Altair's Shortform · 2024-03-07T01:19:54.637Z · LW · GW

The pdf linked by @CstineSublime definitely goes in the direction of the textbook. I've started reading it, and it has been an excellent read so far. I will probably write a review later.

Comment by Morpheus on Morpheus's Shortform · 2024-03-06T06:02:37.254Z · LW · GW

While there is currently a lot of attention on assessing language models, it puzzles me that no one seems to be independently assessing the quality of different search engines and recommender systems. Shouldn't this be easy to do? The only thing I could find related to this is this Russian site (it might be propaganda from Yandex, given that Yandex is listed as the top-quality site?). Taking their "overall search quality" rating at face value does seem to support the popular hypothesis that Google's search quality has slightly deteriorated over the last 10 years (although compared to 2009-2012, quality has been basically the same according to this measure). Overall search result quality.

The gpt-4 translated version of their blog states that they gave up actively maintaining this project in 2014, because search engine quality had become reasonable according to them:

For the first time in the history of the project, we have decided to shut down one of the analyzers: SEO pressing as a phenomenon has essentially become a thing of the past, and the results of the analyzer have ceased to be interesting.

Despite the fact that search engine optimization as an industry continues to thrive, search engine developers have made significant progress in combating the clutter of search results with specially promoted commercial results. The progress of search engines is evident to the naked eye, including in the graph of our analyzer over the entire history of measurements:

[SEO Pressing Analyzer graph: share of commercial results over time]

The result of the analyzer is the share of commercial sites in the search results for queries that do not have a clearly commercial meaning; when there are too many such sites in the search results, it is called susceptibility to SEO pressing. It is easy to see that a few years ago, more than half (sometimes significantly more than half) of the search results from all leading search engines consisted of sites offering specific goods or services. This is, of course, a lot: a query can have different meanings, and the search results should cater to as many of them as possible. At the same time, a level of 2-3 such sites seems decent, since a user who queries "Thailand" might easily be interested in tours to that country, and one who queries "power station" might be interested in power stations for a country home.

If we are worried that current recommender systems are already doing damage and expect things to get worse in the future, it might be good to actively monitor this so we don't get boiled like frogs.

Comment by Morpheus on Morpheus's Shortform · 2024-03-04T04:56:36.394Z · LW · GW

Metaculus recently updated the way they score user predictions. For anyone who used to be active on Metaculus and hasn't logged on for a while, I recommend checking out your peer and baseline accuracy scores in the past years. With the new scoring system, you can finally determine whether your predictions were any good compared to the community median. This makes me actually consider using it again instead of Manifold.

By the way, if you are new to forecasting and want to get better, I would recommend pastcasting and/or calibration games instead, because of the faster feedback loops: you'll know within 1–2 hours, instead of weeks, whether you tend to be overconfident or underconfident.

Comment by Morpheus on Approaching Human-Level Forecasting with Language Models · 2024-03-01T09:42:30.355Z · LW · GW

Something like this sounds really useful just for my personal use. Is someone fine-tuning (or has someone already fine-tuned) a system to be generally numerate and good at Fermi estimates? With just my bad prompting skills, gpt-4 gave pretty mediocre results for that purpose.

Comment by Morpheus on ask me about technology · 2024-02-29T02:50:45.536Z · LW · GW

Do you have any views on the most promising avenues for human intelligence enhancement through biology? I'd be most interested in approaches that would give us (humanity) better odds in worlds where AI takes off in the next 1–15 years.

Comment by Morpheus on CFAR Takeaways: Andrew Critch · 2024-02-28T01:03:52.215Z · LW · GW

Rationality seems to be missing an entire curriculum on "Eros" or True Desire.

I got this curriculum from other trainings, though. There are places where it's hugely emphasized and well-taught.

What are these places?

Comment by Morpheus on How I internalized my achievements to better deal with negative feelings · 2024-02-28T00:00:39.978Z · LW · GW

This information and introspective techniques like Focusing helped me discover that these negative feelings came from some unmet need to feel worthwhile and recognized, but the problem was that I heavily tied my self-worth to the amount of progress I made.

Oops! And thanks! Somehow this articulated the uneasy relationship with impact that I had noticed in myself, in a way that makes me feel like I can finally address it.

Comment by Morpheus on shoes with springs · 2024-02-24T11:26:16.881Z · LW · GW

Reminds me of this

Comment by Morpheus on I'd also take $7 trillion · 2024-02-24T09:56:05.208Z · LW · GW

AI is something I've thought about a lot, but I think I've already posted everything about that that I want to, and people didn't seem to appreciate this that much.

Thanks for linking it! I think one reason I bounced off this article the first time was that, from the title, I had pattern-matched it to the abundant posts on this platform that mostly distill existing arguments.

Comment by Morpheus on CFAR Takeaways: Andrew Critch · 2024-02-15T08:29:08.956Z · LW · GW

Causal Diagramming

  1. or, some kind of external media, other than writing

Does anyone know a nice way to drill this skill? I was just reading one of Steven Byrnes's posts, which made me notice that he is good at this and that I currently lack this skill. It also reminds me of Thinking in Systems, which I read, found cool, and then mostly went about my life without really applying much. I think I have a pretty good intuitive understanding of statistical causal relationships and have thought a lot about confounders, but I've never felt compelled to whip out a diagram.

Comment by Morpheus on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:15:19.544Z · LW · GW

I'd also bet $50 as a gatekeeper. I won this game as a gatekeeper before and now need someone to put my ego in place. I'd prefer to play against someone who won as the AI before.

This post prompted me to wonder to what degree there might be publication bias going on, in that people don't report when they "predictably" won as the gatekeeper (as I did).

Comment by Morpheus on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:14:38.011Z · LW · GW
Comment by Morpheus on Leading The Parade · 2024-01-31T23:03:37.823Z · LW · GW

People occasionally come up with plans like "I'll lead the parade for a while, thereby accumulating high status. Then, I'll use that high status to counterfactually influence things!". This is one subcategory of a more general class of plans: "I'll chase status for a while, then use that status to counterfactually influence things!". Various versions of this often come from EAs who are planning to get a machine learning PhD or work at one of the big three AI labs.

Even if you need status, it might be easier to just be or become friends with the people who already have status and credentials, and they can lend you their status if they think your plan/idea is good. For example, when you are writing a letter to a politician or founding a new org/startup.

Comment by Morpheus on Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B? · 2024-01-31T07:29:01.989Z · LW · GW

I reran the experiments from your first notebook, and it seems like davinci-002 is incapable of this task, at least with this prompt (Code).
[(ugly) plot of results]
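
For reference, a minimal sketch of the kind of sampling check involved (not the notebook's exact code; it assumes the openai v1 Python client, and the prompt is a made-up placeholder):

from collections import Counter
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical prompt asking for "A" 80% of the time and "B" 20% of the time.
prompt = 'Output "A" with probability 0.8 and "B" with probability 0.2.\nOutput:'

response = client.completions.create(
    model="davinci-002",
    prompt=prompt,
    max_tokens=1,
    n=100,            # 100 independent samples
    temperature=1.0,
)
print(Counter(choice.text.strip() for choice in response.choices))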

Comment by Morpheus on David Burns Thinks Psychotherapy Is a Learnable Skill. Git Gud. · 2024-01-30T21:16:02.307Z · LW · GW

Yes, I also found that fishy. I tried finding negative reviews from patients online, but had a hard time with the queries because I didn't know how to properly exclude reviews of his book.

Comment by Morpheus on Will quantum randomness affect the 2028 election? · 2024-01-25T06:22:27.503Z · LW · GW

I could see an argument that quantum randomness does not have a large effect on the particular election in 2028, as long as no one is actively trying to make this prediction wrong. I have no idea about US politics right now, so I have no clue whether it is already foreseeable that margins are going to be large.

At least the election in 2000 would fail this test, though (as discussed on Metaculus). My guess is that weather alone could have turned things around on election day. First-past-the-post voting makes close margins more likely, and the electoral college means that different weather in just one large swing state can turn things around.

One thing I find interesting is how the US election is probably a big amplifier of the relevance of quantum randomness in a lot of other areas. It makes me wonder if there are other identifiable sources of chaos amplification that I don't have on my radar yet. Perhaps the spread of memes? Especially since politics operates so much on level 4. I would guess there are memes that, once circulating, would not be as easily affected by equal and opposite effects.

Comment by Morpheus on From Finite Factors to Bayes Nets · 2024-01-24T11:19:36.346Z · LW · GW

Neat!

I plugged 15 and 1617 into OEIS, which I assume you have tried already, but only got "recreational math" results.

Comment by Morpheus on How to Find a Problem · 2024-01-24T08:24:23.389Z · LW · GW

This is all you can really do. You cannot force yourself to have important ideas, only put your brain in the right place to have them. This is a technique which can be practised and improved on. So go and do it.

This thought takes a lot of pressure off my mind! Love this sequence.

Comment by Morpheus on Natural Latents: The Math · 2024-01-22T06:17:45.990Z · LW · GW

Oops!

Comment by Morpheus on Natural Latents: The Math · 2024-01-22T04:40:40.005Z · LW · GW

Just pasting this into a calculator, your expressions don't seem to be equivalent:

  • first expression e1: ((b ∧ c) => a) ∧ ((c ∧ d) => b) ∧ ((d ∧ b) => c) ∧ ((a ∧ d) => (b ∨ c))
  • second expression e2: ((b ∧ c) => a) ∧ (d => ((a = b) = c))

e1 <=> e2 simplifies to: (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c) ∨ ¬d

e1 alone simplifies to: (a ∧ b ∧ c) ∨ (¬a ∧ ¬b ∧ ¬c) ∨ (¬b ∧ ¬d) ∨ (¬c ∧ ¬d)
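
For anyone who wants to check this without trusting a calculator, here is a brute-force truth-table comparison in Python (my own sketch; it assumes "(a=b)=c" parses left to right):

from itertools import product

def implies(p, q):
    return (not p) or q

def e1(a, b, c, d):
    return (implies(b and c, a) and implies(c and d, b)
            and implies(d and b, c) and implies(a and d, b or c))

def e2(a, b, c, d):
    return implies(b and c, a) and implies(d, (a == b) == c)

# Print every assignment on which the two expressions disagree.
for a, b, c, d in product([False, True], repeat=4):
    if e1(a, b, c, d) != e2(a, b, c, d):
        print(a, b, c, d)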

Comment by Morpheus on Bayesians Commit the Gambler's Fallacy · 2024-01-07T16:47:44.415Z · LW · GW
Comment by Morpheus on 2023 Unofficial LessWrong Census/Survey · 2023-12-30T00:49:17.290Z · LW · GW

I took the survey! Fun!

Comment by Morpheus on Anki setup best practices? · 2023-12-26T02:26:18.837Z · LW · GW

I can't find the source right now, but there is a good article on this that also goes into the tradeoff of recall vs. number of reviews. 95% recall is too high, 70% is too low; 85% is the sweet spot, iirc.

Comment by Morpheus on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T01:03:07.962Z · LW · GW

I am confused. I agree with the above scenario, but disagree that the focus is a bias. Sure, for human society the linear speed-up scale is important, but for the dynamics of the intelligence explosion the log scale seems more important. By your own account, we would rapidly move to a situation where the most capable humans/institutions are in fact the bottleneck, since anyone who is not able to keep up with the speed at which their job is being automated away is not going to contribute much on the margin of intelligence self-improvement. For example, OpenAI/Microsoft/DeepMind/Anthropic/Meta might decide in the future to design and manufacture their chips in-house because NVIDIA can't keep up, etc. I don't know if I expect this to make NVIDIA's stock tank before the world ends. I expect everyone else to profit from slowly generating mundane utility from general AI tools, as is happening today.

Comment by Morpheus on What is the evidence on the Church-Turing Thesis? · 2023-12-03T18:32:26.624Z · LW · GW
Comment by Morpheus on What is the evidence on the Church-Turing Thesis? · 2023-12-03T18:26:30.241Z · LW · GW
Comment by Morpheus on What is the evidence on the Church-Turing Thesis? · 2023-12-03T18:25:19.435Z · LW · GW
Comment by Morpheus on What is the evidence on the Church-Turing Thesis? · 2023-12-03T18:09:53.807Z · LW · GW
Comment by Morpheus on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T20:35:28.350Z · LW · GW

My problem is that you are mixing up real and not-real things. They are different. The whole post above assumes a civilization with way more sanity, among people in power and the people watching them, than the one we live in.

Comment by Morpheus on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T19:45:00.613Z · LW · GW

You may not engage in homosexual activity because of the externality of God smiting the city and/or sending a hurricane.

Well, the problem is that God isn't real.

You may not eat hamburgers because of the externality of catastrophic climate collapse.

Your hamburger becomes slightly more expensive because there is a carbon tax.

I would say your examples are abusing the concept (and I have seen them before, because people make trashy arguments all the time). The concept itself makes a lot of sense.

Comment by Morpheus on Insulate your ideas · 2023-11-24T19:17:29.115Z · LW · GW

With this in mind, consider the world through the eyes of an ancient lich or thousand-year-old vampire. It's a worldview in which ephemeral gains are irrelevant. All that matters is permanent, generalizable knowledge - everything else will fade in time, and usually not even very much time. In this worldview, gears-level understanding is everything.

Comment by Morpheus on More Dakka · 2023-11-03T21:16:42.768Z · LW · GW

Thank you for writing this post. It gave me the framework and motivation to overcome the trivial inconveniences of reading a wiki and writing an email to sign up for the free gym room in my dorm. Another inconvenience that had scared me away was the appointment required to get an introduction to the gym. I will start learning to use the barbell there rather than just the dumbbell I own. Now that I am writing this, it sounds quite insane that I didn't take advantage of this for 3 years.

Comment by Morpheus on Thomas Kwa's MIRI research experience · 2023-10-07T15:48:48.617Z · LW · GW

Oops. Yeah, I forgot to address that one. I was just astonished to hear that no one knows the answer to it.

Comment by Morpheus on Thomas Kwa's MIRI research experience · 2023-10-07T07:54:20.586Z · LW · GW

we still can’t even define “life” in a reasonable way, or answer basic questions like “why do arms come out basically the same size?”

Such a definition seems futile (I recommend the rest of the sequence on words, too). Biology already does a great job explaining what is alive and why. We are not going around thinking a rock is "alive". Or what exactly did you have in mind there?

Comment by Morpheus on Weighing Animal Worth · 2023-10-05T20:08:36.000Z · LW · GW

I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always.

What remains? I think this is basically what I usually think of as my own utility function (it just happens to contain a term for everyone else's). Are you sacrificing what other people think 'the right thing to do' is? What future-you thinks the right thing to do would have been? What future uplifted Koi think the right thing to do would have been?

Comment by Morpheus on Weighing Animal Worth · 2023-10-05T19:43:27.099Z · LW · GW

I agree. I think for me, the intuition mostly stems from neuron count. I also agree with the authors of the sequence that neuron counts are not an ideal metric. What confuses me is that these estimates instead seem to simply take biological "individuals" as the basic unit of moral weight and then adjust with uncertainty from there. That seems even more misguided than neuron count. Bees and ants hack the "individual" metric just by having small brains spread over lots of individual bees/ants. Beehive > human seems absurd.

Comment by Morpheus on Fifty Flips · 2023-10-03T21:49:05.194Z · LW · GW

There is no possible way for a real coin to have that distribution.

 

Unless the person throwing read Jaynes:

a person familiar with the laws of mechanics can toss a biased coin so that it will produce predominantly either heads or tails, at will. [...] From the fact that we have seen a strong preponderance of heads, we cannot conclude legitimately that the coin is biased; it may be biased, or it may have been tossed in a way that systematically favors heads. Likewise, from the fact that we have seen equal numbers of heads and tails, we cannot conclude legitimately that the coin is ‘honest’. It may be honest, or it may have been tossed in a way that nullifies the effect of its bias.

More on how:

An important feature of this tumbling motion is conservation of angular momentum; during its flight the angular momentum of the coin maintains a fixed direction in space (but the angular velocity does not; and so the tumbling may appear chaotic to the eye). Let us denote this fixed direction by the unit vector n; it can be any direction you choose, and it is determined by the particular kind of twist you give the coin at the instant of launching. Whether the coin is biased or not, it will show the same face throughout the motion if viewed from this direction (unless, of course, n is exactly perpendicular to the axis of the coin, in which case it shows no face at all).

Therefore, in order to know which face will be uppermost in your hand, you have only to carry out the following procedure. Denote by k a unit vector passing through the coin along its axis, with its point on the 'heads' side. Now toss the coin with a twist so that k and n make an acute angle, then catch it with your palm held flat, in a plane normal to n. On successive tosses, you can let the direction of n, the magnitude of the angular momentum, and the angle between n and k, vary widely; the tumbling motion will then appear entirely different to the eye on different tosses, and it would require almost superhuman powers of observation to discover your strategy.

Thus, anyone familiar with the law of conservation of angular momentum can, after some practice, cheat at the usual coin-toss game and call his shots with 100% accuracy. You can obtain any frequency of heads you want – and the bias of the coin has no influence at all on the results!
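
Out of curiosity, the key claim (n stays fixed, and the coin shows the same face when viewed from n, i.e. k·n stays constant) is easy to check numerically. A minimal sketch, assuming a torque-free symmetric coin and integrating Euler's rigid-body equations (my own illustration, not from Jaynes):

import numpy as np
from scipy.integrate import solve_ivp

I = np.diag([1.0, 1.0, 1.8])  # inertia tensor of a symmetric coin; axis 3 is the coin's axis k

def rhs(t, y):
    R = y[:9].reshape(3, 3)  # body-to-lab rotation matrix
    w = y[9:]                # angular velocity in the body frame
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])  # skew matrix: W @ v == cross(w, v)
    dR = R @ W
    dw = np.linalg.solve(I, np.cross(I @ w, w))  # Euler's equations, torque-free
    return np.concatenate([dR.ravel(), dw])

y0 = np.concatenate([np.eye(3).ravel(), [2.0, 0.5, 3.0]])  # arbitrary tumbling launch
sol = solve_ivp(rhs, (0.0, 10.0), y0, max_step=1e-3)

for i in (0, len(sol.t) // 2, -1):
    R = sol.y[:9, i].reshape(3, 3)
    w = sol.y[9:, i]
    L = R @ (I @ w)               # angular momentum in the lab frame
    n = L / np.linalg.norm(L)     # Jaynes's fixed direction n
    k = R[:, 2]                   # the coin's axis in the lab frame
    print(n.round(3), round(float(k @ n), 3))  # n stays fixed; k·n stays constant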

Comment by Morpheus on Weighing Animal Worth · 2023-09-30T23:26:14.286Z · LW · GW

I agree with the first but not the second sentence. (Although I am not sure what it is supposed to imply. I can imagine being convinced that shrimps are more important, but the bar for evidence is pretty high.)