Thanks for asking this question. I have come down with something -- I've been feeling increasingly bad for the last 4-5 days, since I took a train out of NYC. In retrospect, I should have taken it earlier, or not at all, but hindsight is 20/20.
I'm not certain I've got C19, but I'm trying to take actions that would help me if I do.
--A lot of fluids, obviously.
--I'm taking vitamin C in large quantities. There's currently a clinical trial which is testing this; I don't know if it will work out, obviously, but I might as well. (https://clinicaltrials.gov/ct2/show/NCT04264533)
--Vitamin D, for much the same reason.
--I'm sleeping / resting / staying in bed, pretty much continuously.
--I have, of course, an oximeter -- so far, all my readings have been >= 94, usually 96-97, which seems fine. I might order a second in case this one is inaccurate.
--I'm planning to get licorice tea, and drink it in enormous quantities. It will, in fact, raise my blood pressure, but apparently licorice contains some antiviral agents. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4629407/)
--One thing that's difficult is that I'm noticing that it's getting harder to think. I'll need to make provision for my future inability to make good decisions.
If people have further ideas, I'd be interested in them. It's a frightening situation for me, although I'm reasonably healthy and in the 30-39 age bracket. I realize that some of the above actions have a very small probability of substantially helping, but it's hard to dig up better ones, especially because so many of the drugs used to fight this require a prescription, and currently you cannot get a prescription until you're already basically at death's door.
I thiiiiink this book makes some important mistakes, judging from a quick glance.
So, for instance -- he asks how much power the regular car-user consumes. He says that energy use per day per person is distance travelled per day, divided by distance per unit of fuel, times energy per unit of fuel. He plugs in numbers and gets 40 kWh / day / person. Significantly, he says that a liter of petrol (dude seems British) has about 10 kWh in it (which Google seems to confirm) and that a typical car gets 12 km / liter (ok, seems fair, haven't double-checked, whatever). So his figure of 40 kWh / day / person implicitly assumes a car gets 12 km per 10 kWh, or 1.2 km/kWh.
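To make that arithmetic concrete, here's a minimal sketch. The 50 km/day driving distance is my own assumption, back-solved from his ~40 kWh / day figure rather than quoted from him:

```python
# Rough reconstruction of the petrol-car estimate (my numbers, not his).
km_per_day = 50        # assumed daily driving distance per person (back-solved)
km_per_litre = 12      # typical petrol car
kwh_per_litre = 10     # energy content of petrol

kwh_per_day = km_per_day / km_per_litre * kwh_per_litre
km_per_kwh = km_per_litre / kwh_per_litre

print(kwh_per_day)     # ~42, i.e. roughly his 40 kWh / day / person
print(km_per_kwh)      # 1.2 km/kWh implied for a petrol car
```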
And later he uses these numbers, together with numbers that purport to show that if we covered all English roofs with solar panels we'd only get 5 kWh / day / person, leaving a significant shortfall relative to the 40 kWh / day / person. (http://www.withouthotair.com/c6/page_39.shtml)
But, um, here's the problem. Electric motors are waaaay more efficient than internal combustion. Wikipedia informs me that a Model S gets about 3 miles per kWh, according to the EPA, which converts to about 4.8 km / kWh. (This isn't the best achievable by any means. The Model 3 looks better, although not by a ground-breaking amount. And this is why electric cars have stupid-sounding [to me] numbers applied to them like "100 miles per gallon of gas equivalent".) So, anyhow, electric is approximately 4x more efficient, which leaves us with a notably smaller shortfall, although one that still means rooftop solar is insufficient for our total driving energy needs. Ah well, I haven't looked into the numbers on how hard it is to get sufficient solar power. 4x off isn't thaaaaat bad for a Fermi estimate, I guess.
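Redoing the same back-of-the-envelope with the electric-car figure (again assuming ~50 km / day, which is my assumption, not his):

```python
km_per_day = 50               # same assumed daily distance as above
km_per_kwh_petrol = 1.2       # implied by 12 km/L at 10 kWh/L
km_per_kwh_electric = 4.8     # ~3 mi/kWh EPA figure for the Model S
solar_kwh_per_day = 5         # his all-roofs-covered-in-panels estimate

petrol_demand = km_per_day / km_per_kwh_petrol      # ~42 kWh / day / person
electric_demand = km_per_day / km_per_kwh_electric  # ~10 kWh / day / person

print(petrol_demand - solar_kwh_per_day)    # ~37 kWh / day shortfall with petrol numbers
print(electric_demand - solar_kwh_per_day)  # ~5 kWh / day shortfall with electric numbers
```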
Anyhow, I might have made elementary mistakes in the above. The only reason I bothered with this comment was that I saw his figures, which seemed to assume we'd require the same total energy for our electric cars as for our petrol ones, and I was like "that seems waaaaaay off." And even if I hadn't done the math I'd still have that overall impression.
Vis-a-vis selecting inputs freely: OpenAI also included a large dump of unconditioned text generation in their github repo.
Nice review, I enjoyed it. I read the books a while ago and it was good to see I'm not alone in seeing them as deeply conservative. As far as that goes, I wondered how much of that is a generally Chinese (as opposed to non-Chinese) attitude, and how much of it is unique to the author.
One thing that keeps bothering me about the book is I can't make sense of Wade.
Wade was the ideal swordholder, because he could stick to commitments. Is he supposed to be absolutely bound by them, though, and is that why he inexplicably obeys Cheng, because he agreed to in the past? That's a coherent notion of character, but it hardly feels explicable; it makes sense if Wade is an AI, but not really as a human. Or at least that's how I felt.
My overall impression looking at this is still more or less summed up by what Francois Chollet said a bit ago.
Any problem can be treated as a pattern recognition problem if your training data covers a sufficiently dense sampling of the problem space. What's interesting is what happens when your training data is a sparse sampling of the space -- to extrapolate, you will need intelligence.
Whether an AI that plays StarCraft, DotA, or Overwatch succeeds or fails against top players, we'd have learned nothing from the outcome. Wins -- congrats, you've trained on enough data. Fails -- go back, train on 10x more games, add some bells & whistles to your setup, succeed.
Some of the stuff DeepMind talks about a lot -- so, for instance, the AlphaStar league -- seems like a clever technique simply designed to ensure that you have a sufficiently dense sampling of the space, which would normally not occur in a game with unstable equilibria. And this seems to me more like "clever technique applicable to domains where we can generate infinite data through self-play" than "stepping stone on the way to AGI."
That being said, I haven't yet read through all the papers in the blog post, and I'd be curious which of them people think might be / definitely are potential steps towards actually engineering intelligence.
Fair. For (1), more than 50% because that was how they've been defining victories in these tournaments. For (2), no unplanned interventions -- i.e., it's fine if they want to drive it on a gravel driveway that they know the thing cannot handle, or fill it up at the supercharger because the car clearly cannot handle that, but in general no interventions because the car would potentially crash in a situation it (ostensibly) should handle. And for (3), meh, "can beat the native scripted AI" seems reasonable.
So if I understand you, for (1) you're proposing a "hard" attention over the image, rather than the "soft" differentiable attention which is typically meant by "attention" for NNs.
You might find "Recurrent Models of Visual Attention" by DeepMind interesting (https://arxiv.org/pdf/1406.6247.pdf). They use hard attention over the image, with RL to train where to attend. I found it interesting -- there's been subsequent work using hard attention as well (I thiiink this is a central paper for the topic, but I could be wrong, and I'm not at all sure what the most interesting recent one is).
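For concreteness, here's a toy sketch of the soft/hard distinction as I understand it (not the paper's actual architecture; the patch features and scores are made up):

```python
import numpy as np

def soft_attention(patches, scores):
    """Soft (differentiable) attention: a softmax-weighted average over all patches.
    Every patch contributes a little, so gradients flow and plain backprop works."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ patches                # weighted blend of every patch

def hard_attention(patches, scores, rng):
    """Hard attention: sample a single patch and look only at it. The sampling step
    is non-differentiable, which is why the RAM paper trains it with REINFORCE."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = rng.choice(len(patches), p=probs)
    return patches[idx], idx

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))    # 16 toy image patches, 64-dim features each
scores = rng.normal(size=16)           # attention logits from some glimpse network

blended = soft_attention(patches, scores)               # what "soft" models use
glimpse, where = hard_attention(patches, scores, rng)   # what "hard" / RL models use
```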
...you were an ancient being, with a mind vast and unsympathetic, concerned with all the events in the path of the light-cone, who has through some mistake been trapped in a smaller, duller mind, forgetting most of the wisdom natural to it, becoming encumbered by fleshy bounds, and who now must decide what to do with the potential it has left.
...the "you" listening to this was one of several complete agents inhabiting a body, each of which has their own plans, goals, and strategies, each of which jockeys for control over the actions of that body, and each of which can wage war or form alliances with each other to try gain more control over that body over the course of a lifetime?