hereisonehand's Shortform
post by hereisonehand · 2019-08-24T02:50:40.464Z · LW · GW · 20 comments
comment by hereisonehand · 2019-09-12T17:43:57.301Z · LW(p) · GW(p)
I keep seeing these articles about the introduction of artificial intelligence/data science to football and basketball strategy. What's crazy to me is that it's happening now instead of much, much earlier. The book Moneyball was published in 2003 (the movie came out in 2011), spreading the story of how statistics changed every aspect of managing a baseball team. After reading it, I and many others thought, "this would be cool to do in other sports" - data could inform every area of every sport (drafting, play calling, coaching, clock management, etc.). But I guess I assumed - if I thought of it, why wouldn't other people?
It's kind of a wild example of the idea that if something works a little, you should do more of it and see if it works a lot, and keep doing that until you see evidence that the incremental benefit is running out. My assumption that the "Moneyball" space was saturated back in 2011 was completely off: in the time between 2011 and now, one could have trained oneself from scratch in the relevant data science methods and pushed for such jobs (my intuition is that 8 years of training could get you there). So it's not even a "right place, right time" story, given the timeline. It's just - when you saw the obvious trend, did you assume everyone else was already thinking about it, or did you jump in yourself?
Replies from: gworley, JustMaier
↑ comment by Gordon Seidoh Worley (gworley) · 2019-09-16T20:02:12.134Z · LW(p) · GW(p)
Part of the problem was that applying those insights in a way that beats trained humans is hard: until recently, the models couldn't handle all the variables and data humans could, and so they ignored many things that made a difference. Now that more data can be fed into the models, they can match or beat human predictions, and thus stand a chance of outperforming humans rather than making "correct" but poorly informed decisions that would have lost games in the real world.
↑ comment by JustMaier · 2019-09-13T04:11:18.434Z · LW(p) · GW(p)
Interesting conclusion. It sounds like the bystander effect. I wonder how many big ideas don't get the action they deserve because, upon hearing them, we assume they're already getting the effort/energy they deserve and that there isn't room for our contribution.
Replies from: hereisonehand, hereisonehand
↑ comment by hereisonehand · 2019-09-13T08:17:12.140Z · LW(p) · GW(p)
Another weird takeaway is the timeline. I think my intuition whenever I hear about a good idea currently happening is that because it's happening right now, it's probably too late for me to get in on it at all because everyone already knows about it. I think that intuition is overweighted. If there's a spectrum from ideas being fully saturated to completely empty of people working on them, when good ideas break in the news they are probably closer to the latter than I give them credit for being. At least, I need to update in that direction.
Replies from: JustMaier
↑ comment by JustMaier · 2019-09-13T16:58:18.918Z · LW(p) · GW(p)
I think this is caused by the fact that we lack tooling to adequately assess the amount of free energy available in new markets sparked by new ideas. Currently, it seems the only gauges we have are media attention and investment announcements.
Taking the time to assess an opportunity is operationally expensive, and I think I've optimized toward assuming there's probably little opportunity, given that everyone else is observing the same thing. However, I'm not sure it makes sense to adjust that default without first getting more efficient at assessing opportunities.
↑ comment by hereisonehand · 2019-09-13T08:11:25.078Z · LW(p) · GW(p)
Yeah, it's interesting because it was a "so clearly a good idea" idea. We tend to either dismiss ideas as bad because we found the fatal flaw, or think "this idea is so flawless it must've been the lowest-hanging fruit and thus already been picked."
Another example that comes to mind is checklists in surgery. Atul Gawande's The Checklist Manifesto (2009) presented his findings that a simple checklist dramatically improved surgical outcomes. I wonder if the "maybe we should try to make some kind of checklist-ish modification to how we approach everything else in medicine" thought needs similar action.
comment by hereisonehand · 2021-04-19T16:10:20.110Z · LW(p) · GW(p)
Did you all see this? https://twitter.com/SquishChaos/status/1383435339910418432?s=20
Basically, it claims that in the next 12 months Ethereum will undergo a supply shock equivalent to three Bitcoin halving events. Curious whether rationalists see a flaw in the reasoning or are already ahead of this.
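As a rough sanity check on the headline number - a minimal sketch, assuming "equivalent of 3 bitcoin halving events" means new issuance falling by half three times over (my reading of the claim, not the thread's exact model):

```python
# Back-of-envelope: what a "triple halving" would mean for new supply.
# Illustrative normalization, not figures from the linked thread.

current_issuance = 1.0                            # normalize current annual issuance
after_triple_halving = current_issuance * 0.5 ** 3

print(f"issuance after a triple-halving-equivalent: {after_triple_halving}")
# -> 0.125, i.e. new supply falling to 1/8 of its prior rate
```

Whether that issuance drop translates into a price-relevant supply shock is a separate question the thread's reasoning would have to carry.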
Replies from: wunan
↑ comment by wunan · 2021-04-21T00:52:47.035Z · LW(p) · GW(p)
There's a discord for Crypto+Rationalists you may be interested in if you're not already aware: https://discord.gg/3ZCxUt8qYw
comment by hereisonehand · 2019-09-12T01:54:27.896Z · LW(p) · GW(p)
I have been watching this video https://www.youtube.com/watch?v=EUjc1WuyPT8 on AI alignment (something I'm very behind on, my apologies), and it occurred to me that one aspect of the problem is finding a concrete, formalized solution to Goodhart's-law-style problems. Yudkowsky was talking about ways an AGI optimized toward making smiles could go wrong (namely, the AGI could find smarter and smarter ways to effectively give everyone heroin to quickly create lasting smiles). One aspect of this problem is that the metric "smiles" is a measurement of the ambiguous target "wellbeing," so when the AGI gives us heroin to make us smile, we say "well no, that isn't what we meant by wellbeing." We're trying to find a way to formally write an algorithm that pursues what we actually mean by wellbeing in a lasting and durable way, rather than an algorithm that gets caught optimizing metrics that measured wellbeing only until they were optimized so hard. I get that the problem of AI alignment has more facets than just that, but finding an effective way to tell an AI what wellbeing is - rather than telling it things that are usually metrics of wellbeing (like smiles) - seems like one facet.
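To make the proxy-versus-target gap concrete, here is a minimal toy sketch (my own illustration, not from the video or the alignment literature): an optimizer that hill-climbs on a made-up "smiles" proxy sails right past the point where a made-up "wellbeing" function peaks.

```python
# Toy Goodhart's law: a proxy tracks the target in the ordinary regime
# but comes apart from it under heavy optimization.
import numpy as np

rng = np.random.default_rng(0)

def wellbeing(x):
    # Hypothetical "true" objective: peaks at a moderate intervention level.
    return -(x - 2.0) ** 2

def smiles(x):
    # Hypothetical proxy: rises with wellbeing early on, but keeps rising
    # as x grows (ever-stronger artificial mood induction).
    return x

# Hill-climb on the proxy only.
x = 0.0
for _ in range(100):
    step = rng.normal(scale=0.5)
    if smiles(x + step) > smiles(x):
        x += step

print(f"intervention level after proxy optimization: {x:.2f}")
print(f"proxy (smiles): {smiles(x):.2f}, true wellbeing: {wellbeing(x):.2f}")
# The optimizer drives "smiles" far past the point where wellbeing peaks:
# the proxy score is high while the true objective has collapsed.
```

The gap only shows up under strong optimization pressure, which is why a metric that worked fine for humans can fail badly for an AGI.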
Is this in fact a part of the AI alignment problem, and if so is anyone trying to solve this facet of the problem and where might I go to read more about that? I've been sort of interested in meta-ethics for a while, and solving this facet of the problem seems remarkably related to solving important problems in meta-ethics.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-09-12T02:32:06.263Z · LW(p) · GW(p)
Is this in fact a part of the AI alignment problem, and if so is anyone trying to solve this facet of the problem and where might I go to read more about that?
Yes, it's part of some approaches to the AI alignment problem. It used to be considered more central to AI alignment until people started thinking it might be too hard, and started working on other ways of trying to solve AI alignment that perhaps don't require "finding an effective way to tell an AI what wellbeing is". See AI Safety "Success Stories" [LW · GW] where "Sovereign Singleton" requires solving this and the others don't (at least not right away). See also Friendly AI and Coherent Extrapolated Volition.
comment by hereisonehand · 2019-08-24T02:50:40.603Z · LW(p) · GW(p)
The fact that the Amazon rainforest produces 20% of atmospheric oxygen (I read this somewhere; hope this isn't fiction) should be a bigger political piece than it seems to be. Seems like Brazil could be leveraging this further on the global stage (having other countries subsidize the cost of maintaining the rainforest and preventing deforestation, since we all benefit from/need the CO2-to-oxygen conversion). Also, would other countries start a tree-planting supply race to eliminate dependence on such a large source of oxygen from any one agent?
Just a strange thought that occurred to me this morning. It obviously doesn't reflect unfortunate realities of current politics (it isn't much of a political piece if the other agents don't believe it's real), but it occurred to me as an alternative politics that might transpire in a world where everyone took climate change seriously as an existential threat.
Replies from: timothy-underwood, ChristianKl
↑ comment by Timothy Underwood (timothy-underwood) · 2019-08-25T13:29:35.802Z · LW(p) · GW(p)
Just as a first note, the atmosphere is about 20% oxygen; the fraction of that which gets turned over by plants in any given year is tiny by comparison. For reference, CO2 (which all of that oxygen released by the Amazon is made from) is 0.04% of the atmosphere. You could kill every plant on earth, and we'd probably have enough oxygen for all of the animals to survive for longer than mankind has existed, though you would get carbon dioxide poisoning at a much earlier point. But even if the only CO2 were coming from respiration, it would still probably take tens of thousands of years before the concentration was dangerous.
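A rough back-of-envelope supporting this, using my own approximate figures (total moles of atmosphere, land-plant carbon stock) rather than anything from the comment:

```python
# Order-of-magnitude check: why oxygen depletion is not a realistic worry.
# All constants are rough, commonly cited approximations.

ATM_TOTAL_MOL = 1.8e20   # total moles of gas in the atmosphere (approx.)
O2_FRACTION = 0.21       # oxygen fraction of the atmosphere
CO2_FRACTION = 0.0004    # CO2 fraction (~0.04%)

o2_mol = ATM_TOTAL_MOL * O2_FRACTION
co2_mol = ATM_TOTAL_MOL * CO2_FRACTION

# Even if every CO2 molecule in the air were photosynthesized into O2,
# oxygen would rise by only this much relative to its current stock:
print(f"max relative O2 gain from all atmospheric CO2: {co2_mol / o2_mol:.2%}")

# Conversely, oxidizing all carbon in land plants (~550 GtC, a rough
# estimate) consumes one mole of O2 per mole of C - a similarly tiny dent:
plant_carbon_mol = 550e15 / 12.0   # grams of carbon / molar mass of carbon
print(f"relative O2 loss from burning all land plants: {plant_carbon_mol / o2_mol:.2%}")
```

Both numbers come out well under one percent, which is the sense in which annual plant turnover is tiny next to the standing oxygen stock.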
The earth running out of oxygen is literally not something anyone should worry about. This is related to why a lot of people don't treat climate change as a serious existential threat. Perhaps there is some reason to think it is one that I haven't heard, but most of the claims of extreme danger that I hear are like "if the Amazon burns down, we'll all run out of oxygen and suffocate," and that simply won't happen.
Replies from: hereisonehand
↑ comment by hereisonehand · 2019-08-25T15:55:39.339Z · LW(p) · GW(p)
Thanks for helping me get informed. I was under the impression (and this is a separate thread) that planting trees was a viable initiative to fight climate change, and by extension that the survival of the Amazon rainforest was a significant climate change initiative. Along those same lines, I guess I'm wondering whether, if climate change is important on the world stage, the health of the rainforest would be as well.
Thanks for correcting me about the oxygen line - that is what I said, and it was misguided.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-08-26T10:39:12.057Z · LW(p) · GW(p)
"Viable initiative" is concept that isn't useful. What we care about are "effective strategies". Whether or not planting trees is effective depends on how much it costs and how much other strategies cost.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2019-08-26T13:18:50.015Z · LW(p) · GW(p)
Oftentimes when an idea seems crazy, the first step is a quick back-of-the-napkin viability assessment.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-08-26T15:03:11.129Z · LW(p) · GW(p)
You are right that there are contexts where viability is a useful notion. It just isn't here.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2019-08-26T15:10:03.808Z · LW(p) · GW(p)
I think it's useful in exactly the context hereisonehand is asking about. "Hey guys, I had this crazy idea - could it work at all?" is definitely both a legitimate and useful question to ask before "OK, we've determined it could work; does it make sense as a strategy to try?"
I think it's dangerous to say "Don't ask about the viability of your crazy ideas on LW, because we only care about the relative value of ideas." I know this is in fact not what you were saying, but I could easily see my S1 interpreting it that way if this were the response I got to one of my first posts on the site.
Replies from: ChristianKl
↑ comment by ChristianKl · 2019-08-26T15:57:57.735Z · LW(p) · GW(p)
Planting trees for the sake of the environment is not a crazy idea. It's a mainstream idea that's held by many people. You can buy a beer and in the process support protecting space in the rainforest.
hereisonehand spoke about being able to extend that idea directly into "survival of the amazon rainforest was a significant climate change initiative."
To me that suggests he sees viability as the same thing as being a good action. It looks to me like reasoning about public interventions without the EA mental models needed to reason well in this context.
Replies from: hereisonehand
↑ comment by hereisonehand · 2019-08-27T01:02:45.875Z · LW(p) · GW(p)
Yup, it was a quick thought I put to the page, and I will quickly and easily concede that 1) my initial idea wasn't expressed very clearly, 2) the way it was expressed is best interpreted by a reader in a way that makes it nonsensical ("what does it mean to say oxygen is produced?" - and I didn't really tie my initial writing to climate change the way I wanted to, so what am I even talking about), 3) even the way I clarified my idea later mixed some thoughts that really should be separated out (viable != effective), and 4) I have some learning to do in the area of EA mental models and reasoning about public interventions. Not my best work.
Reflection:
I'm messing around with shortform as a way to kinda throw ideas on a page. This idea didn't work out too well at generating productive discussion: upon reflection, it wasn't super coherent, let alone pointing toward anything true. However, I got a lot more engagement than I expected, which points to something of value in the medium. I think the course forward is probably to 1) keep experimenting with shortform, because I gain something from having my incoherence pointed out to me and there's a chance I'll be more coherent and useful in the future, and 2) maybe take 5 minutes to reread my shortform posts before I post them (just because it's shortform doesn't mean it can be nonsense).
↑ comment by ChristianKl · 2019-08-26T10:08:40.012Z · LW(p) · GW(p)
What does that even mean? Oxygen doesn't get produced. What trees do is bind the carbon in CO2 and leave the oxygen molecules in the air.