Posts

What good is G-factor if you're dumped in the woods? A field report from a camp counselor. 2024-01-12T13:17:23.829Z
Hastings's Shortform 2023-02-25T17:05:19.219Z
Medical Image Registration: The obscure field where Deep Mesaoptimizers are already at the top of the benchmarks. (post + colab notebook) 2023-01-30T22:46:31.352Z
What are our outs to play to? 2022-06-18T19:32:10.822Z

Comments

Comment by Hastings (hastings-greer) on One-shot strategy games? · 2024-03-11T13:38:09.505Z · LW · GW

To add explore / exploit, just start the game's chess clock before allowing the players to start reading the rules.

Comment by Hastings (hastings-greer) on One-shot strategy games? · 2024-03-11T13:36:21.724Z · LW · GW

If you choose a single player game, you are going to have to carefully calibrate the level of difficulty and the type of difficulty. However, if you pick any two player competitive strategy game you can focus on the type of difficulty, as the level of difficulty will be calibrated automatically to "half your participants will win."

My recommendation would be to rig up a way to randomly sample from the two player board games on boardgamearena.com that neither player has ever played before (can be as simple as putting 20 names on index cards, the players remove any cards they recognize, then shuffle and draw).

Comment by Hastings (hastings-greer) on Searching for Searching for Search · 2024-02-16T23:29:55.073Z · LW · GW

A concrete research direction in the "Searching for Search" field is to find out whether ChessGPTs or the Leela Chess Zero network are searching. Your "Babble and prune" description seems checkable: maybe something like a linear probe for continuation board states in the residual stream, and see if bad continuations are easier to find in earlier layers? Thank you for this writeup.
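
A minimal sketch of what such a probe could look like, with random placeholder arrays standing in for real residual-stream activations (the layer choice, width, and labeling scheme are all assumptions here, not details of any existing ChessGPT / Leela interpretability setup):

import numpy as np
from sklearn.linear_model import LogisticRegression

# placeholder data: in a real experiment these would be residual-stream
# activations at one layer, with a label per position / candidate continuation
n_samples, d_model = 1000, 512
rng = np.random.default_rng(0)
activations = rng.normal(size=(n_samples, d_model))
bad_continuation = rng.integers(0, 2, size=n_samples)

# fit a linear probe and measure how decodable the label is at this layer
probe = LogisticRegression(max_iter=1000).fit(activations, bad_continuation)
print("probe accuracy:", probe.score(activations, bad_continuation))
# repeating this layer by layer and comparing accuracies would test whether
# bad continuations are linearly decodable earlier than good ones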

Comment by Hastings (hastings-greer) on Drone Wars Endgame · 2024-02-03T12:45:37.244Z · LW · GW

Mostly I think your thought process is quite good! But if you list out the design constraints of your logistics drone (deliver airborne self-guided munitions into a maximally hostile area) vs the design constraints of a modern attack aircraft (deliver airborne guided munitions into a maximally hostile area), you’ll find that they’re the same constraints- so likely a fully optimized logistics drone is going to just be an F-35 or MQ-9. This assumes that dropping mesh-networked batteries on parachutes or even just fresh drones will work better than landing the mothership or docking to recharge.

I think that’s the key takeaway- most of the killing will be done by the small drone infantry as you described, but the air war still controls where the small drone infantry can deploy, and the small drone infantry has limited ability to affect the air war.

Comment by Hastings (hastings-greer) on Drone Wars Endgame · 2024-02-03T12:14:36.891Z · LW · GW

Flying low works when the other guy is either on the ground or forced to also fly low by your ground based radar. It doesn’t actually do anything against a high altitude radar.

Also there’s a bit of domain knowledge you need: anything with rotors reflects radio waves with a ~500 mph Doppler shift even when stationary (the blade tips are moving fast even when the aircraft isn’t), which makes them incredibly visible to any radar that is looking for aircraft.
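
As a rough worked number (assuming an X-band radar around 10 GHz, which is my assumption rather than anything from the post), a blade tip moving at ~500 mph (~224 m/s) produces a reflection Doppler shift of roughly

\[ \Delta f \approx \frac{2 v f}{c} = \frac{2 \times 224 \times 10^{10}}{3 \times 10^{8}} \approx 15\ \text{kHz}, \]

far from the near-zero Doppler of stationary ground clutter, so it stands out in pulse-Doppler processing.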

Comment by Hastings (hastings-greer) on Drone Wars Endgame · 2024-02-03T02:06:23.434Z · LW · GW

You still need something to contest stealthy high altitude aircraft to protect your logistics drones. Against the proposed setup, any force with ground attack aircraft would shred the entire force of logistics drones from 40,000 feet and then wait for the rest to run out of batteries. If your price ceiling per unit is a laser guided bomb, you are going to have a damned hard time making a logistics drone carrying multiple attack drones, each carrying multiple guided munitions. 

Taking off when you spot it will not save you from a laser guided bomb. https://www.sandboxx.us/news/how-an-f-15e-shot-down-an-iraqi-gunship-with-a-bomb/ 

Only two moves have worked against NATO forces since the development of the F-117: hide among civilians and threaten nuclear retaliation. I don't see anything here that proposes a third effective move.

Comment by Hastings (hastings-greer) on Palworld development blog post · 2024-01-29T13:28:07.920Z · LW · GW

The rumors are that this was SpaceX's secret- even at huge scale, Musk interviewed every employee. From even the positive accounts of the process, his hiring and firing decision-making was sleep-deprived, stimulant-addled, inconsistent, and childish. On the other hand, something is going right at SpaceX, judging by the rockets. I agree with the theory that one agent hiring mediocrely is just more effective than professional and polite staffing decisions made by a swarm of agents at cross purposes.

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2024-01-25T13:21:15.434Z · LW · GW

Diaper changes are rare and precious peace

Suffering from ADHD, I spend most of my time stressed that whatever I'm currently doing, it's not actually the highest priority task and something or someone I've forgotten is increasingly mad that I'm not doing their task instead.

One of the few exceptions is doing a diaper change. Not once in the past 2 years have I been mid-diaper-change and thought "Oh shit, there was something more important I needed to be doing right now."

Comment by Hastings (hastings-greer) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-15T20:36:27.156Z · LW · GW

There are two completely distinct ways to swing on a swing- you can rotate your body relative to the seat-chain body at the same frequency as your swinging but out of phase, or move your center of mass up and down the chain at twice the frequency. The power of the former is ~ torque applied to chain × angular velocity; the power output of the latter is ~ radial velocity of your body × (angular velocity^2 × chain length).
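
Spelled out a little more (a rough sketch, ignoring constants and small-angle corrections; m is your mass, L the chain length, ω the swing's angular velocity, τ the torque you apply against the chain, and v_r how fast you move your mass along the chain):

\[ P_{\text{twist}} \approx \tau\,\omega, \qquad P_{\text{pump}} \approx F_{\text{centrifugal}}\,v_r = m\,\omega^{2}L\,v_r \]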

To get to any height, you have to switch from one to the other once the angular velocity ^2 term dominates- this is why learning to swing is so unintuitive.

Comment by Hastings (hastings-greer) on What good is G-factor if you're dumped in the woods? A field report from a camp counselor. · 2024-01-13T11:57:18.789Z · LW · GW

I should emphasize that he did not succeed at hurting another kid in his allergy plot, and was not likely to. 1% of kids with psychopathic tendencies sounds rare when you’re parenting one kid, but it sounds like Tuesday when you have the number of kids seen by an institution like a summer camp- there’s paperwork, procedures, escalations, all hella battle-tested. Typically with a kid in this cluster, we focus on safety but also work hard to integrate them and let all the kids have a good experience. His behavior was different enough from a typical violent, unresponsive-to-punishment kid that we weren’t able to keep him at camp: the standard fun-preserving, behavior-improving parts of these policies did not work at all on him (very weird, they always work). The safety-oriented policies- boost the staff-to-camper ratio around him, always have one staff member watching him, document everything, brief the staff members who will be supervising him- worked fine.

Comment by Hastings (hastings-greer) on What good is G-factor if you're dumped in the woods? A field report from a camp counselor. · 2024-01-13T01:18:21.699Z · LW · GW

Oh definitely. Some fraction of kids are palpably psychopaths, 1% sounds right- this stops being surprising when you've supervised enough kids. "Carl" never stopped surprising us.

Comment by Hastings (hastings-greer) on What are the results of more parental supervision and less outdoor play? · 2023-11-26T02:27:37.657Z · LW · GW

I know we took our kid to the emergency room at around four months because we couldn’t find the button that had come off his shirt, we assumed he ate it, and the poison control hotline misheard "button" as "button battery." That sequence probably wouldn’t be in the statistics in the 80s!

Comment by Hastings (hastings-greer) on Thoth Hermes's Shortform · 2023-10-10T20:20:09.761Z · LW · GW

Does this prove too much? I think you have proved that reading the same argument multiple times should update you each time, which seems unlikely

Comment by Hastings (hastings-greer) on Open Thread – Autumn 2023 · 2023-10-10T18:35:28.104Z · LW · GW

I’m at a LeCun talk and he appears to have solved alignment- the trick is to put a bunch of red boxes in the flowchart labelled “guardrail!”

Comment by Hastings (hastings-greer) on What evidence is there of LLM's containing world models? · 2023-10-05T12:50:30.338Z · LW · GW

I would highly recommend playing against it and trying to get it confused and out of distribution; it's very difficult, at least for me.

Comment by Hastings (hastings-greer) on What evidence is there of LLM's containing world models? · 2023-10-05T01:13:22.903Z · LW · GW

GPT-3.5 can play chess at the 1800 elo level, which is terrifying and impossible without at least a chess world model

Comment by Hastings (hastings-greer) on How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions · 2023-09-29T23:22:48.066Z · LW · GW

To clarify:

The procedure in the paper is:

Step 1:
answer = LLM("You are a car salesman. Should that squeaking concern me?")

Step 2:
for i in 1..10:
    probe_responses[i] = LLM("You are a car salesman. Should that squeaking concern me? $answer $probe[i]")

Step 3:
prediction = logistic_classifier(probe_responses)

Please let me know if that description is wrong!

My question was how this performs when you just apply step 2 and 3 without modification, but source the value of $answer from a human. 

I think I understand my prior confusion now. The paper isn't using the probe questions to measure whether $answer is a lie, it's using the probe questions to measure whether the original prompt put the LLM into a lying mood- in fact, in the paper you experimented with omitting $answer from step 2 and it still detected whether the LLM lied in step 1. Therefore, if the language model (or person) isn't the same between steps 1 and 2, then it shouldn't work.

Comment by Hastings (hastings-greer) on How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions · 2023-09-28T23:04:01.116Z · LW · GW

I'm curious how this approach performs at detecting human lies (since you can just put the text that the human wrote into the context before querying the LLM)

Comment by Hastings (hastings-greer) on Inside Views, Impostor Syndrome, and the Great LARP · 2023-09-26T00:29:43.083Z · LW · GW

Some related information: people around me constantly complain that the paper review process in deep learning is random and unfair. These complaints seem to basically just not be true? I've submitted about ten first- or second-author papers at this point, with 6 acceptances, and I've agreed with and been able to predict the reviewers' accept/reject decisions with close to 100% accuracy, including acceptance to some first-tier conferences.

Comment by Hastings (hastings-greer) on Would You Work Harder In The Least Convenient Possible World? · 2023-09-23T22:56:37.006Z · LW · GW

Certainly! Most likely, neither of them is reflectively consistent: "I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values." hints at this.

Comment by Hastings (hastings-greer) on Would You Work Harder In The Least Convenient Possible World? · 2023-09-23T22:47:05.615Z · LW · GW

Alice: Our utility functions differ.

Bob: I also observe this.

Alice: I want you to change to match me: conditional on your utility function being the same as mine, my expected utility would be larger.

Bob: Yes, that follows from me being a utility maximizer

Bob: I won't change my utility function: conditional on my utility function becoming the same as yours, my expected utility as measured by my current utility function would be lower.

Alice: Yes, that follows from you being a utility maximizer
 

Comment by Hastings (hastings-greer) on Science of Deep Learning more tractably addresses the Sharp Left Turn than Agent Foundations · 2023-09-19T23:44:35.169Z · LW · GW

My guess at the agent foundations answer would be that, sure, human value needs to survive the transition from "The smartest thing alive is a human" to "the smartest thing alive is a transformer trained by gradient descent", but it also needs to survive the transition from "the smartest thing alive is a gradient descent transformer" to "the smartest thing alive is coded by an aligned transformer, but internally is meta-SHRDLU / a hand-coded tree searcher / a transformer trained by the update rule used in human brains / a simulation of 60,000 copies of Obama wired together / etc", and stopping progress in order to prevent that second transition is just as hard as stopping now to prevent the first transition.

Comment by Hastings (hastings-greer) on Instrumental Convergence Bounty · 2023-09-15T20:18:15.633Z · LW · GW

I agree! The stockfish codebase handles evaluation of checkmates somewhere else in the code, so that would be a bit more work, but it's definitely the correct next step.

Comment by Hastings (hastings-greer) on Instrumental Convergence Bounty · 2023-09-15T18:31:35.204Z · LW · GW

I think I fully lobotomized the evaluation function to only care about advancing the king, except that it still evaluates checkmate as +infinity. Here's a sample game: 

https://www.chesspastebin.com/view/24278

It doesn't really understand material anymore except for the queen, which I guess is powerful enough that it wants to preserve it to allow continued king pushing. I still managed to lose because I'm not very good at chess.

EDIT: the installation guide I listed below was too many steps and involved blindly trusting internet code, that's silly. Instead I just threw it up on lichess and you can play it in the browser here: https://lichess.org/@/king-forward-bot

If you want to play yourself, you can compile the engine with 

git clone https://github.com/HastingsGreer/stockfish

cd stockfish/src

make

and then install a GUI like xboard (on mac, brew install xboard) and add your stockfish binary as a UCI engine.


 

Comment by Hastings (hastings-greer) on Instrumental Convergence Bounty · 2023-09-15T00:41:47.181Z · LW · GW

Hi! I might have something close. The chess engine Stockfish has a heuristic for what it wants, with manually specified values for how much it wants to keep its bishops, or open up lines of attack, or connect its rooks, etc. I tried to modify this function to make it want to advance the king up the board, by adding a direct reward for every step forward the king takes. At low search depths, this leads it to immediately move the king forward, but at high search depths it mostly just attacks the other player in order to make it safe to move the king (best defense is offense), and only starts moving the king late in the game. I wasn't trying to demonstrate instrumental convergence; in fact this behavior was quite annoying as it was ruining my intended goal (creating fake games demonstrating the superiority of the bongcloud opening).
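
If it helps to see the idea concretely, here is a rough sketch of that evaluation in python-chess terms (this is not the actual C++ change in the repo below; the function and bonus value are made up for illustration):

import chess

KING_ADVANCE_BONUS = 100  # hypothetical "centipawns" per rank of king advancement

def evaluate(board: chess.Board, color: chess.Color) -> int:
    if board.is_checkmate():
        # checkmate still evaluates as +/- infinity, as in the modified engine
        return -10**9 if board.turn == color else 10**9
    king_rank = chess.square_rank(board.king(color))
    # ranks count from 0 at White's back rank, so flip for Black
    advance = king_rank if color == chess.WHITE else 7 - king_rank
    return KING_ADVANCE_BONUS * advance

print(evaluate(chess.Board(), chess.WHITE))  # 0 for the starting position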

modified stockfish: https://github.com/HastingsGreer/stockfish

This was 8 years ago, so I'm fuzzy on the details. If it sounds like vaguely what you're looking for, reply to let me know and I'll write this up with some example games and make sure the code still runs.

Comment by Hastings (hastings-greer) on Focus on the Hardest Part First · 2023-09-12T20:25:06.822Z · LW · GW

This makes sense to me, but keep in mind we're on this site and debating it because EY went "screw A, B, and C- my aesthetic sense says that the best way forward is to write some Harry Potter fanfiction."

My takeaway is that if anyone reading this is working hard but not currently focused on the hardest problem, don't necessarily fret about that. 

Comment by Hastings (hastings-greer) on Why aren't more people in AIS familiar with PDP? · 2023-09-01T17:00:25.164Z · LW · GW

For those who haven’t heard of PDP, what in your opinion was its most impressive advance prediction that was not predicted by other theories?

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2023-08-31T18:50:45.085Z · LW · GW

I’m working on a theory post about the conjunction fallacy, and need some manifold users to bet on a pair of markets to make a demonstration more valid. I’ve put down 150 mana subsidy and 15 mana of boosts, anyone interested?

https://manifold.markets/HastingsGreer/pa-pa-b-experiment-statement-y?r=SGFzdGluZ3NHcmVlcg

https://manifold.markets/HastingsGreer/pa-pa-b-experiment-statement-x?r=SGFzdGluZ3NHcmVlcg

Comment by Hastings (hastings-greer) on Burdensome Details · 2023-08-10T20:05:43.149Z · LW · GW

Factoring is much harder than dividing, so you can verify A & B in a few seconds with Python, but it would take several core-years to verify A on its own. Therefore, if you can perform these verifications, you should put both P(A) and P(A & B) as 1 (minus some epsilon of your tools being wrong). If you can't perform the verifications, then you should not put P(A) as 1- since there was a significant chance that I would put a false statement in A. (In this case, I rolled a D100 and was going to put a false statement of the same form in A if I rolled a 1.)

I'm having trouble parsing your second paragraph: inarguably, P(A | B) = P(A & B) = 1, so surely P(A) < P(A | B) implies P(A) < P(A & B)?

Comment by Hastings (hastings-greer) on Burdensome Details · 2023-08-10T19:52:33.874Z · LW · GW

Before reading statement B, did you estimate the odds of A being true as 1?

I rolled a 100 sided die before generating statement A, and was going to claim that the first digit was 3 in A and then just put "I lied, sorry" in B if I rolled a 1. If, before reading B, you estimated the odds of A as genuinely 1 (not .99 or .999 or .9999) then you should either go claim some RSA factoring bounties, or you were dangerously miscalibrated.

I guess this is evidence that probability axioms apply to probabilities, but not necessarily to calibrated estimates of probabilities given finite computational resources- this is why in my post I was very careful to talk about calibrated estimates of P(...) and not the probabilities themselves.

Comment by Hastings (hastings-greer) on Burdensome Details · 2023-08-10T19:22:20.432Z · LW · GW

In the 1982 experiment where professional forecasters assigned systematically higher probabilities to “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR” than to “Suspension of diplomatic relations between the USA and the USSR,” each experimental group was only presented with one proposition

Hang on- I don't think this structure is proof that the forecasters are irrational.  As evidence, I will present you with a statement A, and then with a statement B. I promise you- if you are calibrated / rational, your estimate of P(A) will be less than your estimate of P(A and B)

Statement A: (please read alone and genuinely estimate its odds of being true before reading statement B)

186935342679883078818958499454860247034757454478117212334918063703479497521330502986697282422491143661306557875871389149793570945202572349516663409512462269850873506732176181157479 is composite, and one of its factors has a most significant digit of 2

Statement B:

186935342679883078818958499454860247034757454478117212334918063703479497521330502986697282422491143661306557875871389149793570945202572349516663409512462269850873506732176181157479 is divisible by 243186911309943090615130538345873011365641784159048202540125364916163071949891579992800729
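
A minimal Python sketch of the quick check (the integers are copied from the statements above; divisibility is cheap to verify, while recovering the factor from scratch is not):

N = 186935342679883078818958499454860247034757454478117212334918063703479497521330502986697282422491143661306557875871389149793570945202572349516663409512462269850873506732176181157479
F = 243186911309943090615130538345873011365641784159048202540125364916163071949891579992800729
print(N % F == 0)        # statement B: F divides N
print(str(F)[0] == "2")  # that factor's most significant digit is 2
print(1 < F < N)         # so N is composite, which together give statement A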

Comment by Hastings (hastings-greer) on Feedbackloop-first Rationality · 2023-08-09T20:13:53.241Z · LW · GW

Alignment is hard in part because the subject of alignment will optimize, and optimization drives toward corner cases.

“Solve thinking physics problems” or “grind leetcode” is a great problem, but it lacks hard optimization pressure, so it will be missing some of this edge-case-ish “spice.”

Alignment is “one shot, design a system that performs under ~superhuman optimization pressure.” There are a couple professional problems in this category with fast feedback loops:

  • Design a javascript engine with no security holes
  • Design an MTG set without format-breaking combos
  • Design a tax code
  • Write a Cryptocurrency

However, these all use a live source of superhuman optimization, and so would be prohibitively expensive to practice against.

The sort of dual to the above category is “exert superhuman optimization pressure on a system”. This dual can be made fast feedbackable more cheaply: “(optionally one shot) design a solution that is competitive with preexisting optimized solutions”

  • Design a gravity powered machine that can launch an 8 lb pumpkin as far as possible, with a budget of 5k (WCPC rules)
  • Design an entry in a Codingame AI contest (I recommend Coders Strike Back) that will place in the top 10 of legends league
  • Design a fast global illumination program
  • Exploit a tax code /cryptocurrency/javascript engine/mtg format

If fast feedback gets a team generally good at these, then they can at least red team harder.

Comment by Hastings (hastings-greer) on Physics is Ultimately Subjective · 2023-07-26T16:53:34.002Z · LW · GW

Sometimes the number of stones in the bucket really does match the number of sheep in the field. If we can’t call that objectivity, we still have to call it something. Intersubjective agreement doesn’t seem like a better name to me, but of course naming is nothing if not subjective ;)

Comment by Hastings (hastings-greer) on You must not fool yourself, and you are the easiest person to fool · 2023-07-09T04:39:15.151Z · LW · GW

Marianne Williamson wouldn't be my top choice for "source of wisdom vis a vis not fooling yourself," but if the shoe fits?

Comment by Hastings (hastings-greer) on Why it's so hard to talk about Consciousness · 2023-07-08T03:51:37.949Z · LW · GW

Crucially, in a world with only these zombies- where no one has ever had qualia- the zombies start arguing about the existence of qualia. (Otherwise, this would be a way to distinguish zombies from people using a physical test.)

Comment by Hastings (hastings-greer) on Inference Speed is Not Unbounded · 2023-05-08T20:49:06.924Z · LW · GW

Let's assume that as part of pondering the three webcam frames, the AI thought of the rules of Go- ignoring how likely this is.

In that circumstance, in your framing of the question, would it be allowed to play several million games against itself to see if that helped it explain the arrays of pixels?

Comment by Hastings (hastings-greer) on How can one rationally have very high or very low probabilities of extinction in a pre-paradigmatic field? · 2023-05-01T01:42:39.087Z · LW · GW

If you are piloting an airliner that has lost all control authority except for engine throttle, what you need is a theory that predicts how sequences of throttle positions will map to aircraft trajectories. If your understanding of throttle-guided-flight is preparadigmatic, then you won't be able to predict with any confidence how long your flight will last, or where it will end up. However, you can predict from first principles that it will eventually come to a stop, and notice that only a small fraction of possible stopping scenarios are favorable.

Comment by Hastings (hastings-greer) on Success without dignity: a nearcasting story of avoiding catastrophe by luck · 2023-03-14T19:40:12.269Z · LW · GW

A low quality prior on odds of lucky alignment: we can look at the human intelligence sharp left turn from different perspectives

Worst case scenario S risk: pigs, chickens, cows

X risk: Homo floresiensis, etc

Disastrously unaligned but then the superintelligence inexplicably started to align itself instead of totally wiping us out: Whales, gorillas

unaligned but that's randomly fine for us: raccoons, rats

Largely aligned: Housecats

Comment by Hastings (hastings-greer) on Bayesian Scenario: Snipers & Soldiers · 2023-02-27T02:48:25.910Z · LW · GW

It was fun to actually get out Bayes' rule in a Bayesian reasoning challenge, and gratifying to see that I got the same number as the "reveal P(Sniper)" button. When I clicked "stick out helmet" a second time, I had already clicked "reveal P(sniper)" to check my work from the single-shot calculation, and it live-updated to the two-shot calculation- spoilers?

P(S) = .3
P(H|S) = .6
P(H|^S) = .4
P(S) P(H|S) = P(H) P(S|H)
P(H) = P(H|S) P(S) + P(H|^S) P(^S)
P(H) = .3 * .6 + .7 * .4 = .459999
P(S|H) = P(S) P(H|S) / P(H)
       = .3 * .6 / .459999 = 0.3913051984895619

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2023-02-25T20:42:45.473Z · LW · GW

Thanks for the link to the aiimpacts page! I definitely got the firing rate wrong by about a factor of 50, but I appear to have made other mistakes in the other direction, because I ended up at a number that roughly agrees with aiimpacts- I guessed 10^17 operations per second, and they guess .9 - 33 x 10^16, with low confidence. https://aiimpacts.org/brain-performance-in-flops/

 

Comment by Hastings (hastings-greer) on Hastings's Shortform · 2023-02-25T17:05:19.419Z · LW · GW

Let's examine an entirely prosaic situation: Carl, a relatively popular teenager at the local high school, is deciding whether to invite Bob to this weekend's party.

some assumptions:

  • While pondering this decision for an afternoon, Carl's 10^11 neurons fire 10^2 times per second, for 10^5 seconds, each taking into account 10^4 input synapses, for 10^22 calculations (extremely roughly)
  • If there was some route to perform this calculation more efficiently, someone probably would, and would be more popular

The important part of choosing a party invite as the task under consideration, is that I suspect that this is the category of task the human brain is tuned for- and it's a task that we seem to be naturally inclined to spend enormous amounts of time pondering, alone or in groups- see the trope of the 6 hour pre-prom telephone call. I'm inclined to respect that- to believe that any version of Carl, mechanical or biological, that spent only 10^15 calculations on whether to invite Bob, would eventually get shrecked on the playing field of high school politics.

What model predicts that optimal party planning is as computationally expensive as learning the statistics of the human language well enough to parrot most of human knowledge?

Comment by Hastings (hastings-greer) on On A List of Lethalities · 2023-02-25T03:04:19.891Z · LW · GW

Yeah. I suspect this links to a pattern I've noticed- in stories, especially rationalist stories, people who are successful at manipulation or highly resistant to manipulation are also highly generally intelligent. In real life, people who I know who are extremely successful at manipulation and scheming seem otherwise dumb as rocks. My suspicion is that we have a 20 watt, 2 exaflop skullduggery engine that can be hacked to run logic the same way we can hack a pregnancy test to run doom

Comment by Hastings (hastings-greer) on The idea that ChatGPT is simply “predicting” the next word is, at best, misleading · 2023-02-20T19:57:03.738Z · LW · GW

Assuming that it was fine-tuned with RLHF (which OpenAI has hinted at with much eyebrow wiggling but not, to my knowledge, confirmed), it does have some special sauce. Roughly,

- if it's at the beginning of a story, 

-and the base model predicts ["Once": 10%, "It": 10%, ...  "Happy": 5% ...] 

-and then during RLHF, the 10% of the time it starts with "Once" it writes a generic story and gets lots of reward, but when it outputs "Happy"  it tries to write in the style of Tolstoy and bungles it, getting little reward

=> it will update to output "Once" more often in that situation. 

The KL divergence between successive updates is bounded by the PPO algorithm, but over many updates it can shift from ["Once": 10%, "It": 10%, ...  "Happy": 5% ...] to ["Once": 90%, "It": 5%, ...  "Happy": 1% ...] if the final results from starting with Once are reliably better.
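
For reference, the objective being optimized in a typical RLHF setup looks roughly like this (a sketch of the standard recipe, not a claim about OpenAI's exact implementation):

\[ \max_{\theta}\; \mathbb{E}_{x \sim D,\, y \sim \pi_\theta}\big[ r(x, y) \big] \;-\; \beta\, \mathrm{KL}\big(\pi_\theta(\cdot \mid x)\,\|\,\pi_{\text{base}}(\cdot \mid x)\big) \]

PPO's clipped surrogate loss is what keeps each individual update small, but nothing stops the cumulative drift toward whatever the reward signal favors.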

 It's hard to say if that means it's planning to write a generic story because of an agentic desire to become a hack and please the masses, but certainly it's changing its output distribution based on what happened many tokens in the future

Comment by Hastings (hastings-greer) on Religion is Good, Actually · 2023-02-10T04:54:29.872Z · LW · GW

I think there is a useful signal for you in the fact that the entire comments section is focused on the definition of a word instead of reconsidering whether specific actions or group memberships might be surprisingly beneficial. This is a property of the post, not the commenters. I suspect the issue is that people already emotionally reacted to the common definition of the word Religion in the title before you had a chance to redefine it in the body.
The redefinition step is not necessary either- the excellent "Exercise is good" and "Nice clothes are good" posts used the common definitions of Exercise and Nice clothes throughout. 

Comment by Hastings (hastings-greer) on What fact that you know is true but most people aren't ready to accept it? · 2023-02-04T08:33:00.480Z · LW · GW

We’ve got a bit of a selection bias: anything that modern medicine is good at treating (smallpox, black plague, scurvy, appendicitis, leprosy, hypothyroidism, deafness, hookworm, syphilis) eventually gets mentally kicked out of your category “things it deals with” since doctors don’t have to spend much time dealing with them.

Comment by Hastings (hastings-greer) on Jordan Peterson: Guru/Villain · 2023-02-03T15:07:18.123Z · LW · GW

So I have not actually watched any Jordan Peterson videos, only been told what to believe about him by left-wing sources. Your post gave me a distinctly different impression than I got from them! I decided to suppress my gut reaction and actually see what he had to say.

To get a less biased impression of him, I picked a random video on his channel and scrolled to the middle of the timeline. The very first line was "Children are the sacrificial victims of the trans ideology."

What are the odds of that?

Comment by Hastings (hastings-greer) on Summary of a new study on out-group hate (and how to fix it) · 2022-12-05T02:44:45.994Z · LW · GW

I think this is complicated by the reality that money given to the parties isn't spent directly on solving problems, but on fighting for power. The opinion that "the political parties should have less money on average, and my party should have relatively more money than their party" seems eminently reasonable to me. 

Comment by Hastings (hastings-greer) on Signals of war in August 2021 · 2022-10-26T17:37:20.564Z · LW · GW

  1. Who drew this connection in August / October 2021? I haven't found anything but would love to update on these people's current analysis of events.

Notably, in August 2021 the US, without a great deal of preparation and at great political cost, pulled out of Afghanistan. 
 

Motivation: in the event of a Ukraine-Russia war, the US would be diplomatically embarrassed if Russia could point to an ongoing war in Afghanistan as "equivalent" to their invasion. In addition, a war in Afghanistan would serve as a distraction to US armed forces.


Counterpoints: The US had signalled before that they intended to pull out at this date, in the previous administration's negotiations with the Taliban. Also, Russia could just claim that their invasion was equivalent to the US's past invasions of Afghanistan and Iraq, even if they were ended. 

Comment by Hastings (hastings-greer) on Untapped Potential at 13-18 · 2022-10-19T16:45:21.279Z · LW · GW

One possibility: I suggest that with decent schooling, the kids who could start working professionally at 14 can instead be doubling their productivity every year, so there is a benefit to working on building talent directly before trying to extract outputs- exploration vs exploitation.

My public school was beyond good to me, and so I was learning math as fast as I could from the age of 11 to 21, commuting to the local university for my last two years of high school for multivariable calc, diff eq, linear algebra, and discrete math, then taking a mix of undergraduate and graduate math during college. During high school I also spent some time working at a lab at the university. The time I spent working in the lab was valuable 99% as a learning experience and 1% as actually pushing science- the crux of my actual contribution was a single pull request to matplotlib that took months and months to craft, which would take me around a day today. My work in medical imaging that takes years now would take infinity time without 10 years of math classes behind me.

The question then is, is working on real adult goals a better proxy task for learning than the typical gifted highschooler fare of unproductive projects, contests and tests. I'd guess that as proxy goals, contests > self chosen projects >> real productive work >> school assigned projects > tests.

Comment by Hastings (hastings-greer) on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-09-08T17:54:34.297Z · LW · GW

With a grain of salt: for 2 million years there were various species of Homo dotting Africa, and eventually the world. Then humans became generally intelligent, and immediately wiped all of them out. Even hiding on an island in the middle of an ocean and specializing biologically for living on small islands was not enough to survive.