Posts

Beckeck's Shortform 2022-04-01T17:30:24.545Z

Comments

Comment by Beckeck on Principles for the AGI Race · 2024-09-03T21:48:09.397Z · LW · GW

The top post claims that while principle one (seek broad accountability) might be useful in a more perfect world, here in reality it doesn't work great.

Reasons include that the pressure to be held to high standards by the public tends to cause orgs to Do PR rather than speak truth.

Comment by Beckeck on CFAR Takeaways: Andrew Critch · 2024-02-14T17:43:47.784Z · LW · GW

know " sentence needs an ending 

 

Comment by Beckeck on On ‘Responsible Scaling Policies’ (RSPs) · 2023-12-05T17:34:54.965Z · LW · GW
Comment by Beckeck on On ‘Responsible Scaling Policies’ (RSPs) · 2023-12-05T17:32:17.259Z · LW · GW

"ARC (they just changed names to METR, but I will call them ARC for this post)" ---almost but not quite 

-- ARC Evals (the frontier-model evaluation people, led by Beth Barnes, with Paul on board/advising) has become METR, while ARC (the Alignment Research Center, doing big-brain math and heuristic arguments, led by Paul) remains ARC.

Comment by Beckeck on Bring back the Colosseums · 2023-09-27T00:35:01.205Z · LW · GW

" football, hockey, rugby, boxing, kick-boxing and MMA to be amongst the worst sports for this stuff." - - I'm not up to date on the current literature but I'm pretty sure this list is rather wrong. I don't remember all the details of the study I do remember (and I don't have time for a lit review) but in it women's high school soccer actually had the highest concussion rate (idk if it was per participant season or hour or per game minute or...).

Comment by Beckeck on Find Hot French Food Near Me: A Follow-up · 2023-09-07T20:22:55.944Z · LW · GW

I accept that mayonnaise is an evolution of allioli (but maintain that the historical fact is that its American ubiquity routes through French chefs).

Wikipedia also doesn't say that it's not a mother sauce; if you scroll down you'll find this:
"Auguste Escoffier wrote that mayonnaise was a French mother sauce of cold sauces,[27] like espagnole or velouté."
I originally wrote "controversially a mother sauce" because the most common listing of mother sauces on the internet is ~wrong. The YouTube video I linked includes primary-source scholarship on the topic that has begun to update the general understanding in the direction the quote supports.
 


 

Comment by Beckeck on Find Hot French Food Near Me: A Follow-up · 2023-09-07T01:17:34.419Z · LW · GW


This is as much a nitpick with Zvi's article as with this one, but French food just seems hard to find because it's easy to misidentify. French technique is the bedrock of American food, both because the history of fine dining (/haute cuisine) routes directly through French chefs, restaurants, systems, and techniques, and because French food has been repurposed into American food.
Some examples: mayonnaise, the delicate, challenging-to-make emulsion of flavored fats and vinegars, controversially a mother sauce*, becomes 'mayo', the white stuff that goes on sandwiches; charcuterie becomes the deli aisle; boeuf bourguignon becomes stew.

So in your example you can probably (I haven't researched the restaurant, but from general knowledge as a professional chef) count at least the "new American" restaurant as French, since "new American" is the (new(ish)) American take on a fine-dining tradition that comes from France. 'Chef' just means 'chief' in French (like the military rank, or the man in charge) and comes from the brigade system (https://en.wikipedia.org/wiki/Brigade_de_cuisine).




* (https://en.wikipedia.org/wiki/French_mother_sauces and https://www.youtube.com/watch?v=tcDk-JcAnOw)

Comment by Beckeck on Hell is Game Theory Folk Theorems · 2023-05-06T16:37:03.701Z · LW · GW

If the game theory here is correct, I'm sad it isn't simply explained as such; if it isn't correct, I'm sad it's curated.

Comment by Beckeck on AI #7: Free Agency · 2023-04-13T22:19:27.797Z · LW · GW

Plus one for "stop worrying about what people will say in response so much, get the actual information out there, stop being afraid."

See also Anna Salamon's takes on 'not doing PR', which someone else might find and link?
 

Comment by Beckeck on 2+2=π√2+n · 2023-02-03T22:44:37.569Z · LW · GW

↑ is not ^ 
https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation 

Comment by Beckeck on Movie Review: Megan · 2023-01-23T18:09:13.739Z · LW · GW

"Gemma think that objective function and its implications through. At all.2"


*doesn't

 

Comment by Beckeck on How my team at Lightcone sometimes gets stuff done · 2022-09-19T18:34:38.243Z · LW · GW

Given this notional use case (and the relative inexperience of the implied user), I think it's even more important to (as Gunnar mentioned) contextualize this advice as to whom it's for and how they should use it.

Doing that properly would take more than I have for this at the moment, but I'd appreciate epistemic tagging regarding things like:
this could only work at a new/small scale (for reasons including that the cost of keeping everyone at 100% context scales with org size while the benefits don't)
that strategy has to fit the employees you have, and this sort of strategy constrains the type of person you can hire to those who would fit it (which is a cost to be considered, not a fatal flaw).
 

Comment by Beckeck on ITT-passing and civility are good; "charity" is bad; steelmanning is niche · 2022-07-07T23:31:29.695Z · LW · GW

I think that steelmanning a person is usually a bad idea; rather, one should steelman positions (when one cares about the matter to which the positions are relevant).

I claim this avoids a sufficient swath of the OP's outlined problems with steelmanning to escape the article's claim of 'nicheness', and that the semi-tautology of 'appropriate steelmanning is appropriate' more accurately maps reality.

also:
"The problem isn't 'charity is a good conversational norm, but these people are doing it wrong'; the problem is that charity is a bad conversational norm. If nothing else, it's bad because it equivocates between 'be friendly' norms and 'have accurate beliefs about others' norms."
Here we can see a bad use case for steelmanning (having accurate beliefs about others), which makes me wonder if it's not a question of doing it wrong (contra the OP). I also notice that I think most people should have fewer conversations about what people think and more conversations about what is true (where steelmanning again becomes more relevant), and wonder where you fall (because such a thing might be upstream?).


I also am apparently into declaratives today. 

(meta: written without much rigor or editing, rather than left unwritten)

Comment by Beckeck on My cognitive inertia cycle · 2022-06-26T17:01:39.484Z · LW · GW

First, writing things so you know them seems valuable.

Second, FWIW, in my struggles with depression I've found physical habits to be the easiest route to something better. When you don't know what to do but need to do something, go for a walk/hike/jog and let your brain sync up with your body a little, burn some calories to regain some hunger, and earn some of the tiredness you may already feel.

Comment by Beckeck on The best 'free solo' (rock climbing) video · 2022-06-18T17:43:33.839Z · LW · GW

Good video, even if I don't quite agree with the superlative. I suspect that the festival film this video is about (this YouTube is a Pete Whitaker behind-the-scenes: https://youtu.be/pDCSzC7PJBg) will be better, and also I'm excited for the full 3D/VR films that should be coming out soon (here is Alex Honnold doing the behind-the-scenes thing: https://youtu.be/dy4jGZ--gre).

Comment by Beckeck on Why all the fuss about recursive self-improvement? · 2022-06-14T23:06:38.449Z · LW · GW

I think clever duplication of human intelligence is plenty sufficient for general superhuman capacity in the important sense (where I mean something like 'it has capacities that would be extinction-causing if (it believes) minimizing its loss function is achieved by turning off humanity (which could turn it off / start other (proto-)AGIs)').

For one, I don't think humanity is that robust in the status quo, and two, a team of internally aligned (because they're copies) human-level intelligences capable of graduate-level biology seems plenty existentially scary.

Comment by Beckeck on [NSFW Review] Interspecies Reviewers · 2022-04-01T17:37:49.788Z · LW · GW

typo: "Gugguk" should be "Gigguk" 

Comment by Beckeck on Beckeck's Shortform · 2022-04-01T17:30:25.071Z · LW · GW

Hi, I'm Beck. For credibility, Eliezer once said I was chosen by the Food Gods. The following is on the Pareto frontier of delicious, easy condiments and is robust to change (but not the most healthy).
Chipotle Mayo:
(<5 min prep)
ingredients: 
2 cups good quality mayo (Hellman's is a classic)
1-2 minced chipotles in adobo *
1-2 pinch (ground toasted) cumin 
1/2 teaspoon (smoked) paprika 

1. combine ingredients
2. eat with things (potatoes, roast vegetables, meats...)  


* Small cans are available in the Hispanic section of most US grocery stores; use a couple of chilis (and their sauce) at a time, then keep the rest in the fridge for whatever you want made delicious, smoky, and spicy (maybe more mayo).

Comment by Beckeck on 12 interesting things I learned studying the discovery of nature's laws · 2022-02-21T03:49:58.343Z · LW · GW

Some places to look (with hope that others might add theirs):
Moneyball (the book; the movie lacks detail but gets some of the spirit)
FiveThirtyEight's methodology articles on their various sports (and other) models (https://fivethirtyeight.com/features/how-our-raptor-metric-works/
https://fivethirtyeight.com/features/how-fivethirtyeight-2020-primary-model-works/)
probably a bunch of articles from Grantland (which is archived but available, but I lack titles off the top of my head)
https://en.wikipedia.org/wiki/Sports_analytics
Zvi's sports betting articles
 

Comment by Beckeck on 12 interesting things I learned studying the discovery of nature's laws · 2022-02-20T22:18:02.225Z · LW · GW

In lieu of writing nothing instead, informally:
Hey, good list! I wonder if you've read much of the recent history of sabermetrics, which to me is the modern equivalent (in that it's a history of a bunch of nerds, and some people who wanted to get rich, who actualized statistical modeling at the frontier of the applied science)?

Comment by Beckeck on Competence/Confidence · 2021-11-20T21:16:22.608Z · LW · GW

I barely assed the exercise but like/liked this.
For (graph-comfortable) me, the graphs* cleanly got me to some subset of the relevant frames/questions/narratives.

If I'd written the post, a version of the final chart would have been first (or at least near the start). I have various other thoughts on that level which feel both like quibbles and like significant differences in how you and I (or one agent and another) carry out the moment-to-moment cognition of an attempting-to-be-rational agent.

 



*except the green bar graph, which I could just guess at meanings for.

Comment by Beckeck on Attempted Gears Analysis of AGI Intervention Discussion With Eliezer · 2021-11-15T18:46:22.378Z · LW · GW

Not an expert, but I think life is an existence proof for the power of nanotech, even if the specifics of a grey goo scenario seem less than likely to be possible. Trees turn sunlight and air into wood, ribosomes build peptides and proteins, and while current-generation models of protein folding are a ways from having generative capacity, it's unclear how many breakthroughs stand between humanity and that general/generative capacity.
 

Comment by Beckeck on Speaking of Stag Hunts · 2021-11-07T00:10:04.916Z · LW · GW

Jumping in here in what I hope is a prosocial way. I assert as a hypothesis that the two of you currently disagree about what level of meta the conversation is/should be at, and each feels that the other has an obligation to meet them at their level, and this has turned up the heat a lot.

Maybe there is a more oblique angle than this currently heated one?

Comment by Beckeck on An Unexpected Victory: Container Stacking at the Port of Long Beach · 2021-10-28T21:53:16.286Z · LW · GW

To confirm what Vitor said, it's the logistics companies, not the port, that had a rules change:
         "The rule change does not apply to terminals at the Port of Long Beach, which routinely stack containers up to six high. Many media reports over the weekend didn’t make a distinction between the port and inland zone, making it appear the port had new authority to increase vertical storage."
- https://www.freightwaves.com/news/city-of-long-beach-allows-logistics-companies-to-stack-containers-higher
 

Comment by Beckeck on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T23:09:10.691Z · LW · GW

link to the essay if/when you write it? 

Comment by Beckeck on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T02:51:43.076Z · LW · GW

Yes, but not on the blog https://thezvi.wordpress.com/

Comment by Beckeck on Problems in AI Alignment that philosophers could potentially contribute to · 2019-08-30T20:37:43.195Z · LW · GW

I'm concerned with this list because it doesn't distinguish between the epistemological contexts of the questions provided. For the purpose of this comment there are at least three categories.

First:
Broadly considered completed questions with unsatisfactory answers. These are analogous to Gödelian incompleteness, and most questions of this type are subforms of the general question: do you/we/does one have purpose or meaning, and what does that tell us about the decisions you/we/one should make?

This question isn't answered in any easy sense, but I think most naturalist philosophers have reached the conclusion that this is a completed area of study. Namely: there is no inherent purpose or meaning to life, and no arguments for any particular ethical theory are rigorous enough, or could ever be rigorous enough, to conclude that it is Correct. That said, we can construct meaning that is real to us; we just have to be okay with it being turtles all the way down.

I realize that these prior statements are considered bold by many, but I think (A) they are true, and (B) that conclusion isn't the point of this post; it's used to illustrate the epistemological category.

Second:
Broadly considered answered questions with unsatisfactory answers that have specific fruitful investigations. These are analogous to P=NP problems in math/computation. By analogous to P=NP, I mean it's a question where you've more or less come to a conclusion, and it's not the one we were hoping for, but there is still work by those hopeful that they will "solve" it generally, and there is certainty that there are valuable subareas where good work can lead to powerful results.

Third: other questions.

The distinction between categories one and two is conceptually important, because category one is not all that productive to mine, and asking for it to be mined leads to sad miners, either because they aren't productive at mining and that was their goal, or because they become frustrated with management for asking silly questions.

I'm afraid that much of the list is from category one (my comments in parentheses):

    • Decision theory for AI / AI designers (category two)
      • How to resolve standard debates in decision theory?
      • Logical counterfactuals
      • Open source game theory
      • Acausal game theory / reasoning about distant superintelligences
    • Infinite/multiversal/astronomical ethics (?)
      • Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?
      • What kinds of (e.g. spatial-temporal) discounting is necessary and/or desirable?
    • Fair distribution of benefits (category one if philosophy, category two if policy analysis)
      • How should benefits from AGI be distributed?
      • For example, would it be fair to distribute it equally over all humans who currently exist, or according to how much AI services they can afford to buy?
      • What about people who existed or will exist at other times and in other places or universes?
    • Need for "metaphilosophical paternalism"?
      • However we distribute the benefits, if we let the beneficiaries decide what to do with their windfall using their own philosophical faculties, is that likely to lead to a good outcome?
    • Metaphilosophy (category one)
      • What is the nature of philosophy?
      • What constitutes correct philosophical reasoning?
      • How to specify this into an AI design?
    • Philosophical forecasting (category two if policy analysis)
      • How are various AI technologies and AI safety proposals likely to affect future philosophical progress (relative to other kinds of progress)?
    • Preference aggregation between AIs and between users
      • How should two AIs that want to merge with each other aggregate their preferences?
      • How should an AI aggregate preferences between its users?
    • Normativity for AI / AI designers (category 1)
      • What is the nature of normativity? Do we need to make sure an AGI has a sufficient understanding of this?
    • Metaethical policing (?)
      • What are the implicit metaethical assumptions in a given AI alignment proposal (in case the authors didn't spell them out)?
      • What are the implications of an AI design or alignment proposal under different metaethical assumptions?
      • Encouraging designs that make minimal metaethical assumptions or is likely to lead to good outcomes regardless of which metaethical theory turns out to be true.
      • (Nowadays AI alignment researchers seem to be generally good about not placing too much confidence in their own moral theories, but the same can't always be said to be true with regard to their metaethical ideas.)