Comments

Comment by Veedrac on Are we in an AI overhang? · 2021-04-07T22:31:42.213Z · LW · GW

Thanks, I did get the PM.

Comment by Veedrac on Are we in an AI overhang? · 2021-03-17T20:38:20.936Z · LW · GW

There's a lot worth saying on these topics, I'll give it a go.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-03-17T20:19:54.535Z · LW · GW

Also, our intuitions about the extent to which nobody has a good idea of how to make TAI might differ too.

To be clear I'm not saying nobody has a good idea of how to make TAI. I expect pretty short timelines, because I expect the remaining fundamental challenges aren't very big.

What I don't expect is that the remaining fundamental challenges go away through small-N search over large architectures, if the special sauce does turn out to be significant.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-27T19:13:11.352Z · LW · GW

Well I understand now where you get the 17, but I don't understand why you want to spread it uniformly across the orders of magnitude. Shouldn't you put all the probability mass for the brute-force evolution approach on some gaussian around where we'd expect that to land, and only have probability elsewhere to account for competing hypotheses? Like I think it's fair to say the probability of a ground-up evolutionary approach only using 10-100 agents is way closer to zero than to 4%.
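
To make the contrast concrete, here is a minimal sketch of the two allocation schemes; the 17-OOM range and the placement of the evolution anchor come from the discussion above, while the gaussian width is a made-up illustrative choice.

```python
import numpy as np
from scipy.stats import norm

# 17 orders of magnitude between the HBHL anchor and the brute-force
# evolution anchor (the figure quoted in this thread).
ooms = np.arange(1, 18)

# The quoted scheme: spread 100% of the probability mass uniformly.
uniform = np.full(17, 1 / 17)
print(round(uniform[0], 3))      # ~0.059, i.e. "almost 6% per OOM"

# The alternative suggested here: the brute-force-evolution hypothesis gets
# a gaussian centred on the evolution anchor (OOM 17), with competing
# hypotheses covered separately; the width of 2 OOMs is purely illustrative.
evolution = norm.pdf(ooms, loc=17, scale=2)
evolution /= evolution.sum()
print(evolution[0])              # essentially zero near the HBHL anchor, not ~4%
```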

I'm still not following the argument. [...] So e.g. when you have 3 OOMs more compute than the HBHL milestone

I think you're mixing up my paragraphs. I was referring here to cases where you're trying to substitute searching over programs for the AI special sauce.

If you're in the position where searching 1000 HBHL hypotheses finds TAI, then the implicit assumption is that model scaling has already substituted for the majority of AI special sauce, and the remaining search is just an enabler for figuring out the few remaining details. That or that there wasn't much special sauce in the first place.

To maybe make my framing a bit more transparent, consider the example of a company trying to build useful, self-replicating nanoscale robots using an atomically precise 3D printer under the conditions where 1) nobody there has a good idea of how to go about doing this, and 2) you have 1000 tries.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-27T15:47:10.728Z · LW · GW

It takes us about 17 orders of magnitude away from the HBHL anchor, in fact. Which is not very far, when you think about it. Divide 100 percentage points of probability mass evenly across those 17 orders of magnitude, and you get almost 6% per OOM, which means something like 4x as much probability mass on the HBHL anchor than Ajeya puts on it in her report!

I don't understand what you're doing here. Why 17 orders of magnitude, and why would I split 100% across each order?

I don't follow this argument. It sounds like double-counting to me

Read ‘and therefore’, not ‘and in addition’. The point is that the more you spend your compute on search, the less directly your search can exploit computationally expensive models.

Put another way, if you have HBHL compute but spend nine orders of magnitude on search, then the per-model compute is much less than HBHL, so the reasons to argue for HBHL don't apply to it. Equivalently, if your per-model compute estimate is HBHL, then the HBHL metric is only relevant for timelines if search is fairly limited.

I'm not sure I get the distinction between enabler and substitute, or why it is relevant here. The point is that we can use compute to search for the missing special sauce. Maybe humans are still in the loop; sure.

Motors are an enabler in the context of flight research because they let you build and test designs, learn what issues to solve, build better physical models, and verify good ideas.

Motors are a substitute in the context of flight research because a better motor means more, easier, and less optimal solutions become viable.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-27T12:44:41.929Z · LW · GW

Eventually the conclusion holds trivially, sure, but that takes us very far from the HBHL anchor. Most evolutionary algorithms we do today are very constrained in what programs they can generate, and are run over small models for a small number of iteration steps. A more general search would be exponentially slower, and even more disconnected from current ML. If you expect that sort of research to be pulling a lot of weight, you probably shouldn't expect the result to look like large connectionist models trained on lots of data, and you lose most of the argument for anchoring to HBHL.

A more standard framing is that ‘we can do trial-and-error on our AI designs’, but there we're again in a regime where scale is an enabler for research, moreso than a substitute for it. Architecture search will still fine-tune and validate these ideas, but is less likely to drive them directly in a significant way.

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:19:06.142Z · LW · GW

Short-term economic value: How lucrative will pre-AGI systems be, and how lucrative will investors expect they might be? What size investments do we expect?

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:18:51.087Z · LW · GW

Societal robustness: How robust is society to optimization pressure in general? In the absence of recursive improvement, how much value could a mildly superintelligent agent extract from society?

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:18:33.523Z · LW · GW

What is the activation energy for an Intelligence Explosion?: What AI capabilities are needed specifically for meaningful recursive self-improvement? Are we likely to hit a single intelligence explosion once that barrier is reached, or will earlier AI systems also produce incomplete explosions, eg. if very lopsided AI can recursively optimize some aspects of cognition, but not enough for generality?

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:18:15.853Z · LW · GW

Personability vs Abstractness: How much will the first powerful AI systems take on the traits of humans, versus being idealized, unbiased reasoning algorithms?

If the missing pieces of intelligence come from scaling up ML models trained on human data, we might expect a bias towards humanlike cognition, whereas if the missing pieces of intelligence come from key algorithmic insights, we might expect fewer parallels.

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:17:48.094Z · LW · GW

Forewarning: Will things go seriously wrong before they go irreversibly wrong?

Comment by Veedrac on Poll: Which variables are most strategically relevant? · 2021-01-27T02:17:30.768Z · LW · GW

Lopsidedness: Does AI risk require solving all the pieces, or does it suffice to have an idiot savant that exceeds human capabilities on only some axes, while still underperforming on others?

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-26T23:19:18.450Z · LW · GW

Thanks, I think I pretty much understand your framing now.

I think the only thing I really disagree with is that “"can use compute to automate search for special sauce" is pretty self-explanatory.” I think this heavily depends on what sort of variable you expect the special sauce to be. Eg. for useful, self-replicating nanoscale robots, my hypothetical atomic manufacturing technology would enable rapid automated iteration, but it's unclear how you could use that to automatically search for a solution in practice. It's an enabler for research, moreso than a substitute. Personally I'm not sure how I'd justify that claim for AI without importing a whole bunch of background knowledge of the generality of optimization procedures!

IIUC this is mostly outside the scope of what your article was about, and we don't disagree on the meat of the matter, so I'm happy to leave this here.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-26T12:30:36.172Z · LW · GW

I am by no means an expert on fusion power, I've just been loosely following the field after the recent bunch of fusion startups, a significant fraction of which seem to have come about precisely because HTS magnets significantly shifted the field strength you can achieve at practical sizes. Control and instabilities are absolutely a real practical concern, as are a bunch of other things like neutron damage; my expectation is only that they are second-order difficulties in the long run, much like wing shape was a second-order difficulty for flight. My framing is largely shaped by this MIT talk (here's another, here's their startup).

I called that complexity term "Special sauce." I have not in this post argued that the amount of special sauce needed is small; I left open the possibility that it might be large.

I'm probably just wanting the article to be something it's not then!

I'll try to clarify my point about key variables. The real-world debate of short versus long AI timelines pretty much boils down to the question of whether the techniques we have for AI capture enough of cognition that short-term future prospects (scaling and research both) end up covering the important remaining pieces for TAI.

It's pretty obvious that GPT-3 doesn't do some things we'd expect a generally intelligent agent to do, and it also seems to me (and seems to be a commonality among skeptics) that we don't have enough of a grounded understanding of intelligence to expect to fill in these pieces from first principles, at least in the short term. Which means the question boils down to ‘can we buy these capabilities with other things we do have, particularly the increasing scale of computation, and by iterating on ideas?’

Flight is a clear case where, as you've said, you can trade the one variable (power-to-weight) to make up for inefficiencies and deficiencies in the other aspects. I expect fusion is another. A case where this doesn't seem to be clearly the case is in building useful, self-replicating nanoscale robots to manufacture things, in analogy to cells and microorganisms. Lithography and biotech have given us good tools for building small objects with defined patterns, but there seems to be a lot of fundamental complexity to the task that can't easily be solved by this. Even if we could fabricate a cubic millimeter of matter with every atom precisely positioned, it's not clear how much of the gap this would close. There is an issue here with trading off scale and manufacturing to substitute for complexity and the things we don't understand.

‘Part 1: Extra brute force can make the problem a lot easier’ says that you can do this sort of trade for AI, and it justifies this in part by drawing analogy to flight. But it's hard to see what intrinsically motivates this comparison specifically, because trading off a motor's power-to-weight ratio for physical upness is very different to trading off a computer's FLOP rate for abstract thinkingness. I assumed you did this because you believed (as I do) that this sort of argument is general. Hence, a general argument should apply generally, so unless there's something special about fusion, it should apply there too. If you don't believe it's a general sort of argument, then why the comparison to flight, rather than to useful, self-replicating nanoscale robots?

If instead you're just drawing comparison to flight to say it's potentially possible that compute is fungible with complexity, rather than it being likely, then it just seems like not a very impactful argument.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-25T15:39:22.142Z · LW · GW

In the case of fusion, it certainly seems that control is a key variable, at least in retrospect -- since we've had temperature and pressure equal to the sun for a while.

To get this out of the way, I expect that fusion progress is in fact predominantly determined by temperature and pressure (and factors like those that go into the Q factor), and expect that issues with control won't seem very relevant to long-run timelines in retrospect. It's true that we've had temperature and pressure equal to the sun for a while, but it's also true that low-yield fusion is pretty easy. The missing piece cannot simply be control, since even a perfectly controlled ounce of a replica sun is not going to produce much energy. Rather, we just have a higher bar to cross before we get yield.
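
For readers who want the quantitative version of 'temperature, pressure, and the other factors that go into the Q factor': the usual figure of merit is the Lawson triple product, quoted below from memory as a rough rule of thumb for D-T fuel rather than from anything in this thread:

\[
n \, T \, \tau_E \;\gtrsim\; 3 \times 10^{21} \ \mathrm{keV \cdot s / m^3},
\]

where \(n\) is plasma density, \(T\) temperature, and \(\tau_E\) the energy confinement time. Higher magnetic field improves the achievable confinement at a given machine size, which is the sense in which HTS magnets shift what is practical.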

In fusion, you can use temperature and pressure to trade off against control issues. This is most clearly illustrated in hydrogen bombs. In fact, there is little in-principle reason you couldn't use hydrogen bombs to heat water to power a turbine, even if it's not the most politically or economically sensible design.

They need to argue that a.) X is probably necessary for TAI, and b.) X probably won't arrive shortly after the other variables are achieved. I think most of the arguments I am calling bogus cannot be rephrased in this way to achieve a and b, or if they can, I haven't seen it done yet.

While I've seen arguments about the complexity of neuron wiring and function, the argument has rarely been ‘and therefore we need a more exact diagram to capture the human thought processes so we can replicate it’, as much as ‘and therefore intelligence is likely to rely on a lot of specialized machinery and hardcoded knowledge.’

This argument refutes that in its naïve direct form, because, as you say, nature would add complexity irrespective of necessity, even for marginal gains. But if you allow for fusion to say, well, the simple model isn't working out, so let's add [miscellaneous complexity term], as long as it's not directly in analogy to nature, then why can't AI Longs say, well, GPT-3 clearly isn't capturing certain facets of cognition, and scaling doesn't immediately seem to be fixing that, so let's add [miscellaneous complexity term] too? Hence, ‘and therefore intelligence is likely to rely on a lot of specialized machinery and hardcoded knowledge.’

I don't think we necessarily disagree on much wrt. grounded arguments about AI, but I think if one of the key arguments (‘Part 1: Extra brute force can make the problem a lot easier’) is that certain driving forces are fungible, and can trade-off for complexity, then it seems like cases where that doesn't hold (eg. your model of fusion) would be evidence against the argument's generality. Because we don't really know how intelligence works, it seems that either you need to have a lot of belief in this class of argument (which is the case for me), or you need to be very careful applying it to this domain.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-25T11:30:00.515Z · LW · GW

OK, but doesn't this hurt the point in the post? Shorty's claim that the key variables for AI ‘seem to be size and training time' and not other measures of complexity seems no stronger (and actually much weaker) than the analogous claim that the key variables for fusion seem to be temperature and pressure, and not other measures of complexity like plasma control.

If the point of the post is only to argue against one specific framing for introducing appeals to complexity, rather than advocate for the simpler models, it seems to lose most of its predictive power for AI, since most of those appeals to complexity can be easily rephrased.

Comment by Veedrac on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-01-24T20:23:42.566Z · LW · GW

The appeal-to-nature's-constants argument doesn't work great in this context because the sun actually produces fairly low power per unit volume. Nuclear fusion on Earth requires vastly higher power density to be practical.
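
As a rough back-of-envelope check on that claim (my numbers, not from the thread): the Sun's luminosity of about \(3.8 \times 10^{26}\) W spread over its volume of about \(1.4 \times 10^{27}\) m³ gives

\[
\frac{3.8\times10^{26}\ \mathrm{W}}{1.4\times10^{27}\ \mathrm{m^3}} \approx 0.3\ \mathrm{W/m^3},
\]

and even the core only reaches a few hundred W/m³, orders of magnitude below the power density a practical reactor needs.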

That said, I think it is correct that temperature and pressure are the key factors. I just don't think the factors map on to the natural equivalents, as much as onto some physical equations that give us the Q factor.

In the context of the article, controlling the plasma is an appeal to complexity; if it turns out to be a rate limiter even after temperature and pressure suffice, then it would be evidence against the argument, but if it turns out not to matter that much, it would be evidence for.

Comment by Veedrac on Pseudorandomness contest: prizes, results, and analysis · 2021-01-17T05:21:58.846Z · LW · GW

I'll take fifth place in Round 1 (#44), given how little I thought of my execution XD. Debiasing algorithms work. I'm not convinced there's a real detected difference between the top Round 1 participants anyway; we are beating most random strings, and none of the top players thought it was more likely than not any of them were human.

My Round 2 performance was thoroughly middle of the pack, with a disappointing negative score. I didn't spend much effort on it and certainly didn't attempt calibration, so it's not a huge surprise I didn't win, but I still hoped for a positive score. What I am most surprised at is that four of my 0% scores were real (#8, #61, #121, #122). I was expecting one, maybe two (yes, yes, I already said ‘I didn't attempt calibration’) might be wrong, but four seems excessive. I can't really blame calibration for the mediocre performance, since my classification rate (60.5%) was also middle of the road, but I think I underestimated how much bang-for-the-buck I would have gotten from calibration, rather than working on the details.

Perhaps interestingly, someone who bet the mean % for every option (excluding self-guesses), with no weighting, would have scored 19.5 (drawn fourth place), or 19.8 post-squeeze, with a 64.5% classification rate. Even if you exclude everyone who scored 10 or more from that average, the average would have scored 14.5, or 15.8 post-squeeze, with (only) a 59.7% classification rate. So averaging out even a bunch of mediocre opinions seems to get you pretty decent, mostly-well-calibrated results.

Alternatively, someone who bet the weighted average from the column in the sheet, which is of course a strategy impossible to implement without cheating, would have scored 27.7, or 28.0 post-squeeze, with a 74.2% classification rate. So even that form of cheating wouldn't beat the Scy & William duo.
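
The averaging result above is an instance of a general effect that is easy to reproduce. Here is a minimal simulation; the noise parameters are made up and a simple log-style score stands in for the contest's actual scoring rule, so treat it as an illustration of the phenomenon rather than a reconstruction of this contest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strings, n_judges = 100, 20

# Ground truth: half the strings are human-generated.
truth = rng.random(n_strings) < 0.5

# Each judge sees a weak signal plus their own noise and miscalibration.
signal = np.where(truth, 0.65, 0.35)
judges = np.clip(signal + rng.normal(0, 0.25, (n_judges, n_strings)), 0.01, 0.99)

def score(p, truth):
    # Stand-in scoring rule: log2(2p) on the correct side, negative when the
    # probability is on the wrong side of 50%.
    return np.mean(np.where(truth, np.log2(2 * p), np.log2(2 * (1 - p))))

individual = np.mean([score(j, truth) for j in judges])
averaged = score(judges.mean(axis=0), truth)
print(individual, averaged)   # the simple average typically beats the typical judge
```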

Comment by Veedrac on DALL-E by OpenAI · 2021-01-10T03:57:21.395Z · LW · GW

I expect getting a dataset an order of magnitude larger than The Pile without significantly compromising on quality will be hard, but not impractical. Two orders of magnitude (~100 TB) would be extremely difficult, if even feasible. But it's not clear that this matters; per Scaling Laws, dataset requirements grow more slowly than model size, and a 10 TB dataset would already be past the compute-data intersection point they talk about.
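
To give a feel for why data lags model size: the Kaplan et al. scaling-laws paper suggests dataset requirements grow roughly as \(N^{0.74}\) to avoid overfitting. The sketch below uses that exponent with GPT-3-scale numbers (~175B parameters, ~300B training tokens) as rough assumptions, not precise figures.

```python
# Rough illustration: data needed if D ~ N^0.74 (Kaplan et al. 2020),
# anchored to approximate GPT-3 numbers. All figures are ballpark assumptions.
def tokens_needed(params, ref_params=175e9, ref_tokens=300e9, exponent=0.74):
    return ref_tokens * (params / ref_params) ** exponent

for scale in [1, 10, 100, 1000]:
    tokens = tokens_needed(175e9 * scale)
    print(f"{scale:>5}x parameters -> ~{tokens / 1e12:.1f}T tokens")
# Under this fit, 1000x the parameters needs only ~165x the tokens.
```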

Note also that 10 TB of text is an exorbitant amount. Even if there were a model that would hit AGI with, say, a PB of text, but not with 10 TB of text, it would probably also hit AGI with 10 TB of text plus some fairly natural adjustments to its training regime to inhibit overfitting. I wouldn't argue this all the way down to human levels of data, since the human brain has much more embedded structure than we assume for ANNs, but certainly huge models like GPT-3 start to learn new concepts in only a handful of updates, and I expect that trend of greater learning efficiency to continue.

I'm also skeptical that images, video, and such would substantially change the picture. Images are very information sparse. Consider the amount you can learn from 1MB of text, versus 1MB of pixels.

Correlations among these senses gives rise to understanding causality. Moreover, human brains might have evolved innate structures for things like causality, agency, objecthood, etc which don't have to be learned.

Correlation is not causation ;). I think it's plausible that agenthood would help progress towards some of those ideas, but that doesn't much argue for multiple distinct senses. You can find mere correlations just fine with only one.

It's true that even a deafblind person will have mental structures that evolved for sight and hearing, but that's not much of an argument that it's needed for intelligence, and given the evidence (lack of mental impairment in deafblind people), a strong argument seems necessary.

For sure I'll accept that you'll want to train multimodal agents anyway, to round out their capabilities. A deafblind person might still be intellectually capable, but it doesn't mean they can paint.

Comment by Veedrac on DALL-E by OpenAI · 2021-01-07T23:33:31.211Z · LW · GW

Audio, video, text, images

While other media would undoubtedly improve the model's understanding of concepts hard to express through text, I've never bought the idea that it would do much for AGI. Text has more than enough in it to capture intelligent thought; it is the relations and structure that matters, above all else. If this weren't true, one wouldn't expect competent deafblind people, but there are. Their successes are even in spite of an evolutionary history with practically no surviving deafblind ancestors! Clearly the modules that make humans intelligent, in a way that other animals and things are not, are not dependent on multisensory data.

Comment by Veedrac on Will OpenAI's work unintentionally increase existential risks related to AI? · 2021-01-06T01:48:49.008Z · LW · GW

To the question of how OpenAI's demonstrations of scaled-up versions of current models affect AI safety: I don't think much changes. It does seem that OpenAI is aiming to go beyond simple scaling, which seems much riskier.

As to the general question, certainly that news makes me more worried about the state of things. I know way too little about the decision to be more concrete than that.

Comment by Veedrac on Open & Welcome Thread - December 2020 · 2020-12-20T20:21:39.678Z · LW · GW

Thanks, I figured this wouldn't be a new question. UDASSA seems quite unsatisfying (I have no formal argument for that claim) but the perspective is nice. I appreciate the pointer :).

Comment by Veedrac on Open & Welcome Thread - December 2020 · 2020-12-20T16:13:42.107Z · LW · GW

Consider a fully deterministic conscious simulation of a person. There are two possible futures, one where that simulation is run once, and another where the simulation is run twice simultaneously in lockstep, with the exact same parameterization and environment. Do these worlds have different moral values?

I ask because...

initially I would have said no, probably not, these are identically the same person, so there is only one instance actually there, but...

Consider a fully deterministic conscious simulation of a person. There are two possible futures, one where that simulation is run once, and another where the simulation is also run once, but with the future having twice the probability mass. Do these worlds have different moral values?

to which the answer must surely be yes, else it's really hard to have coherent moral values under quantum mechanics, hence the contradiction.

Comment by Veedrac on What technologies could cause world GDP doubling times to be <8 years? · 2020-12-10T21:49:03.316Z · LW · GW

Do you expect pre-takeoff AI to provide this? What sort of AI and production capabilities are you envisioning?

Or are you answering this question without reference to AI? If so, what would make this useful for estimating AI timelines?

Comment by Veedrac on AGI Predictions · 2020-11-21T21:15:43.542Z · LW · GW

This is only true if, for example, you think AI would cause GDP growth. My model assigns a lot of probability to ‘AI kills everyone before (human-relevant) GDP goes up that fast’, so questions #7 and #8 are conditional on me being wrong about that. If we can last any small multiples of a year with AI smart enough to double GDP in that timeframe, then things probably aren't as bad as I thought.

Comment by Veedrac on AGI Predictions · 2020-11-21T11:54:59.794Z · LW · GW

To emphasize, the clash I'm perceiving is not the chance assigned to these problems being tractable, but to the relative probability of ‘AI Alignment researchers’ solving the problems, as compared to everyone else and every other explanation. In particular, people building AI systems intrinsically spend a degree of their effort, even if completely unconvinced about the merits of AI risk, trying to make systems aligned, just because that's a fundamental part of building a useful AI.

I could talk about the specific technical work, or the impact that things like the AI FOOM Debate had on Superintelligence had on OpenPhil, or CFAR on FLI on Musk on OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations and so on and ways that this seems to me to be clear progress on subproblems.

I have a sort of Yudkowskian pessimism towards most of these things (policy won't actually help; Iterated Amplification won't actually work), but I'll try to put that aside here for a bit. What I'm curious about is what makes these sort of ideas only discoverable in this specific network of people, under these specific institutions, and particularly more promising than other sorts of more classical alignment.

Isn't Iterated Amplification in the class of things you'd expect people to try just to get their early systems to work, at least with ≥20% probability? Not, to be clear, exactly that system, but just fundamentally RL systems that take extra steps to preserve the intentionality of the optimization process.

To rephrase a bit, it seems to me that a worldview in which AI alignment is sufficiently tractable that Iterated Amplification is a huge step towards a solution would also be a worldview in which AI alignment is sufficiently easy (though not necessarily easy in an absolute sense) that there should be a much larger prior belief that it gets solved anyway.

Comment by Veedrac on AGI Predictions · 2020-11-21T09:54:55.753Z · LW · GW

There is a huge difference in the responses to Q1 (“Will AGI cause an existential catastrophe?”) and Q2 (“...without additional intervention from the existing AI Alignment research community”), to a point that seems almost unjustifiable to me. To pick the first matching example I found (and not to purposefully pick on anybody in particular), Daniel Kokotajlo thinks there's a 93% chance of existential risk without the AI Alignment community's involvement, but only 53% with. This implies that there's a ~43% chance of the AI Alignment community solving the problem, conditional on it being real and unsolved otherwise, but only a ~7% chance of it not occurring for any other reason, including the possibility of it being solved by the researchers building the systems, or the concern being largely incorrect.
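
Spelling out the arithmetic behind those two figures, using the quoted 93% and 53% as inputs:

\[
P(\text{solved by the alignment community} \mid \text{otherwise doomed}) = \frac{0.93 - 0.53}{0.93} \approx 0.43,
\qquad
P(\text{no catastrophe for any other reason}) = 1 - 0.93 = 0.07.
\]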

What makes people so confident in the AI Alignment research community solving this problem, far above that of any other alternative?

Comment by Veedrac on The Colliding Exponentials of AI · 2020-11-01T10:39:53.498Z · LW · GW

On the other hand, improvements on ImageNet (the datasets alexnet excelled on at the time) itself are logarithmic rather than exponential and at this point seem to have reached a cap at around human level ability or a bit less (maybe people got bored of it?)

The best models are more accurate than the ground-truth labels.

Are we done with ImageNet?
https://arxiv.org/abs/2006.07159

Yes, and no. We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.

Figure 7 shows that model progress is much larger than the raw progression of ImageNet scores would indicate.

Comment by Veedrac on The Solomonoff Prior is Malign · 2020-10-25T08:53:31.695Z · LW · GW

I think this is wrong, but I'm having trouble explaining my intuitions. There are a few parts;

  1. You're not doing Solomonoff right, since you're meant to condition on all observations. This makes it harder for simple programs to interfere with the outcome.
  2. More importantly but harder to explain, you're making some weird assumptions about the simplicity of meta-programs that I would bet are wrong. There seems to be a computational difficulty here, in that you envision small worlds trying to manipulate other, vastly larger worlds. That makes it really hard for the simplest program to be one where the meta-program that's interpreting the pointer to our world is a rational agent, rather than some more powerful but less grounded search procedure. If ‘naturally' evolved agents are interpreting the information pointing to the situation they might want to interfere with, this limits the complexity of that encoding. If they're just simulating a lot of things to interfere with as many worlds as possible, they ‘run out of room', because the space of worlds they would need to cover is vastly larger than anything they can simulate.
  3. Your examples almost self-refute, in the sense that if there's an accurate simulation of you being manipulated at some time, it implies that simulation was not materially interfered with before that time, so even if the vast majority of Solomonoff inductions have an attempted adversary, most of them will miss anyway. Hypothetically, superrational agents might still be able to coordinate to manipulate some very small fraction of worlds, but it'd be hard and only relevant to those worlds.
  4. Compute has costs. The most efficient use of compute is almost always to enact your preferences directly, not manipulate other random worlds with low probability. By the time you can interfere with Solomonoff, you have better options.
  5. To the extent that a program is manipulating predictions so that another program simulating it performs unusually... well, then that's just how the metaverse is. If the simplest program containing your predictions is an attempt at manipulating you, then the simplest program containing you is probably being manipulated.

Comment by Veedrac on I'm Voting For Ranked Choice, But I Don't Like It · 2020-09-20T20:59:51.349Z · LW · GW

IRV is an extremely funky voting system, but almost anything is better than Plurality. I very much enjoyed Ka-Ping Yee's voting simulation visualizations, and would recommend the short read for anyone interested.

I have actually made my own simulation visualization, though I've spent no effort annotating it and the graphic isn't remotely intuitive. It models a single political axis (eg. ‘extreme left’ to ‘extreme right’) with N candidates and 2 voting populations. The north-east axis of the graph determines the centre of one voting population, and the south-east axis determines the centre of the other (thus the west-to-east axis is when the voting populations agree). The populations have variances and sizes determined by the sliders. The interesting thing this has taught me is that IRV/Hare voting is like an otherwise sane voting system but with additional practically-unpredictable chaos mixed in, which is infinitely better than the systemic biases inherent to plurality or Borda votes. In fact, if you see advantages in sortition, this might be a bonus.
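
For anyone who would rather play with the idea than squint at my unannotated graphic, here is a stripped-down sketch of the kind of simulation described: one political axis, two gaussian voter populations, and an IRV/Hare count. The candidate positions and population parameters are arbitrary illustrative values, not the ones in my tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def irv_winner(candidates, voters):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest first
    preferences, where each voter prefers candidates closer on the 1D axis."""
    remaining = list(range(len(candidates)))
    while len(remaining) > 1:
        dists = np.abs(voters[:, None] - candidates[remaining][None, :])
        firsts = np.array(remaining)[np.argmin(dists, axis=1)]
        tallies = [np.sum(firsts == c) for c in remaining]
        remaining.pop(int(np.argmin(tallies)))
    return remaining[0]

# Five candidates spread across the axis (illustrative positions).
candidates = np.array([-0.8, -0.3, 0.0, 0.4, 0.9])

# Two voter populations, each with its own centre, spread, and size.
voters = np.concatenate([
    rng.normal(-0.5, 0.3, 600),   # population A
    rng.normal(0.6, 0.2, 400),    # population B
])

print("IRV winner position:", candidates[irv_winner(candidates, voters)])
```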

Comment by Veedrac on Where is human level on text prediction? (GPTs task) · 2020-09-20T16:44:40.948Z · LW · GW

Sources:

https://web.stanford.edu/~jurafsky/slp3/

https://www.isca-speech.org/archive/Interspeech_2017/abstracts/0729.html

The latter is the source for human perplexity being 12. I should note that it tested on the 1 Billion Words benchmark, where GPT-2 scored 42.2 (35.8 was for Penn Treebank), so the results are not exactly 1:1.
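
For intuition about the size of that gap: perplexity is just the exponential of the cross-entropy, so it converts to bits per word. Using the numbers above,

\[
\log_2 42.2 \approx 5.4, \qquad \log_2 35.8 \approx 5.2, \qquad \log_2 12 \approx 3.6 \ \text{bits per word},
\]

so on these (non-identical) benchmarks the model needs very roughly 1.8 more bits per word than the human estimate.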

Comment by Veedrac on How Much Computational Power Does It Take to Match the Human Brain? · 2020-09-12T15:05:29.965Z · LW · GW

FLOPS don't seem to me a great metric for this problem; they are often very sensitive to the precise setup of the comparison, in ways that aren't very relevant (the Donkey Kong comparison emphasized this), and the architecture of computers is fundamentally different to that of brains. What seems like a more apt and stable comparison is the size and shape of the computational graph, roughly the tuple (width, depth, iterations). This seems like a much more stable metric, since scale-based metrics normally only change significantly when you're handling the problem in a semantically different way. In the example, hardware implementations of Donkey Kong and various sorts of software emulation (software interpreter, software JIT, RTL simulation, FPGA) will have very different throughputs on different hardware, and the setup and runtime overheads for each might be very different, but the actual runtime computation graphs should look very comparable.

This also has the added benefit of separating out hypotheses that should naturally be distinct. For example, a human-sized brain at 1x speed and a hamster brain at 1000x speed are very different, yet have seemingly similar FLOPS. Their computation graphs are distinct. Technology comparisons like FPGAs vs AI accelerators become a lot clearer from the computation graph perspective; an FPGA might seem at a glance more powerful from a raw OP/s perspective, but first principles arguments will quickly show it should be strictly weaker than an AI accelerator. It's also more illuminating given we have options to scale up at the cost of performance; from a pure FLOPS perspective, this is negative progress, but pragmatically, this should push timelines closer.
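
A toy illustration of the point; every number below is made up purely to show the shape of the comparison, not a real neuron count or firing rate.

```python
from dataclasses import dataclass

@dataclass
class ComputeShape:
    width: float       # parallel units active per step
    depth: float       # sequential steps per pass
    iterations: float  # passes per second

    def flops_like(self) -> float:
        # A FLOPS-style throughput figure collapses the whole shape to a scalar.
        return self.width * self.depth * self.iterations

# Made-up illustrative numbers: a "big but slow" system vs a "small but fast" one.
big_slow   = ComputeShape(width=1e9, depth=1e3, iterations=1e1)
small_fast = ComputeShape(width=1e6, depth=1e3, iterations=1e4)

print(big_slow.flops_like() == small_fast.flops_like())  # True: identical "FLOPS"
# ...yet the two shapes support very different computations.
```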

Comment by Veedrac on Forecasting Thread: AI Timelines · 2020-08-26T06:16:20.425Z · LW · GW

I disagree with that post and its first two links so thoroughly that any direct reply or commentary on it would be more negative than I'd like to be on this site. (I do appreciate your comment, though, don't take this as discouragement for clarifying your position.) I don't want to leave it at that, so instead let me give a quick thought experiment.

A neuron's signal hop latency is about 5ms, and in that time light can travel about 1500km, a distance approximately equal to the radius of the moon. You could build a machine literally the size of the moon, floating in deep space, before the speed of light between the neurons became a problem relative to the chemical signals in biology, as long as no single neuron's connection spanned more than halfway through it. Unlike today's silicon chips, a system like this would be restricted by the same latency propagation limits that the brain is, but still, it's the size of the moon. You could hook this moon-sized computer to a human-shaped shell on Earth, and as long as the computer was directly overhead, the human body could be as responsive and fully updatable as a real human.
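
The back-of-envelope numbers, for anyone checking:

\[
5\,\mathrm{ms} \times c \approx 0.005\,\mathrm{s} \times 3\times10^{5}\,\mathrm{km/s} \approx 1500\,\mathrm{km},
\]

against a lunar radius of roughly 1,737 km.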

While such a computer is obviously impractical on so many levels, I find it a good frame of reference to think about the characteristics of how computers scale upwards, much like Feynman's There's Plenty of Room at the Bottom was a good frame of reference for scaling down, considered back when transistors were still wired by hand. In particular, the speed of light is not a problem, and will never become one, except where it's a resource we use inefficiently.

Comment by Veedrac on Forecasting Thread: AI Timelines · 2020-08-25T00:02:33.072Z · LW · GW

Scaling Language Model Size by 1000x relative to GPT3. 1000x is pretty feasible, but we'll hit difficult hardware/communication bandwidth constraints beyond 1000x as I understand.

I think people are hugely underestimating how much room there is to scale.

The difficulty, as you mention, is bandwidth and communication, rather than cost per bit in isolation. An A100 manages 1.6TB/sec of bandwidth to its 40 GB of memory. We can handle sacrificing some of this speed, but SSDs, for example, aren't fast enough; 350 TB of SSD memory would cost just $40k, but would only manage 1-2 TB/s over the whole array, and could not deliver that bandwidth to a single GPU anyway. More DRAM on the GPU does hit physical scaling issues, and scaling out to larger clusters of GPUs does start to hit difficulties after a point.

This problem is not due to physical law, but the technologies in question. DRAM is fast, but has hit a scaling limit, whereas NAND scales well, but is much slower. And the larger the cluster of machines, the more bandwidth you have to sacrifice for signal integrity and routing.

Thing is, these are fixable issues if you allow for technology to shift. For example,

  • Various sorts of persistent memories allow fast dense memories, like NRAM. There's also 3D XPoint and other ReRAMs, various sorts of MRAMs, etc.
  • Multiple technologies allow for connecting hardware significantly more densely than we currently do, primarily things like chiplets and memory stacking. Intel's Ponte Vecchio intends to tie 96 (or 192?) compute dies together, across 6 interconnected GPUs, each made of 2 (or 4?) groups of 8 compute dies.
  • Neural networks are amenable to ‘spatial computing' (visualization), and with appropriate algorithms the end-to-end latency can largely be ignored as long as the block-to-block latency and throughput are sufficiently high. This means there's no clear limit to this sort of scaling, since the individual latencies are invariant to scale.
  • The switches themselves between the computers are not at a limit yet, because of silicon photonics, which can even be integrated alongside compute dies. That example is in a switch, but they can also be integrated alongside GPUs.
  • You mention this, but to complete the list, sparse training makes scale-out vastly easier, at the cost of reducing the effectiveness of scaling. GShard showed effectiveness at >99.9% sparsities for mixture-of-experts models, and it seems natural to imagine that a more flexible scheme with only, say, 90% training sparsity and support for full-density inference would allow for 10x scaling without meaningful downsides.
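
The arithmetic behind that last point, under the simplifying assumption that training compute scales with the number of parameters touched per step:

\[
\text{training compute} \propto N_{\text{active}} = (1 - s)\,N_{\text{total}}, \qquad s = 0.9 \;\Rightarrow\; N_{\text{total}} = 10\,N_{\text{active}},
\]

i.e. 90% training sparsity lets total parameters grow 10x at roughly constant training cost.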

It seems plausible to me that a Manhattan Project could scale to models with a quintillion parameters, aka. 10,000,000x scaling, within 15 years, using only lightweight training sparsity. That's not to say it's necessarily feasible, but that I can't rule out technology allowing that level of scaling.

Comment by Veedrac on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T22:23:10.557Z · LW · GW

It might be possible to convince me on something like that, as it fixes the largest problem, and if Hanson is right that blackmail would significantly reduce issues like sexual harassment then it's at least worth consideration. I'm still disinclined towards the idea for other reasons (incentivizes false allegations, is low oversight, difficult to keep proportionality, can incentivize information hiding, seems complex to legislate), but I'm not sure how strong those reasons are.

Comment by Veedrac on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-21T20:54:50.420Z · LW · GW

I agree this makes a large fractional change to some AI timelines, and has significant impacts on questions like ownership. But when considering very short timescales, while I can see OpenAI halting their work would change ownership, presumably to some worse steward, I don't see the gap being large enough to materially affect alignment research. That is, it's better OpenAI gets it in 2024 than someone else gets it in 2026.

This constant seems to be very small, which is why compute had to drop all the way to ~$1k before any researchers worldwide were fanatical enough to bother trying CNNs and create AlexNet.

It's hard to be fanatical when you don't have results. Nowadays AI is so successful it's hard to imagine this being a significant impediment.

Excluding GShard (which as a sparse model is not at all comparable parameter-wise)

I wouldn't dismiss GShard altogether. The parameter counts aren't equal, but MoE(2048E, 60L) is still a beast, and it opens up room for more scaling than a standard model.

Comment by Veedrac on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T18:18:30.044Z · LW · GW

Robin Hanson argued that negative gossip is probably net positive for society.

Yes, this is what my post was addressing and what the analogy was about. I consider it an interesting hypothesis, but not one that holds up to scrutiny.

Lying about someone in a damaging way is already covered by libel/slander laws.

I know, but this only further emphasizes how much better paying those who helped a conviction is. Blackmail is private, threat-based, and necessarily unpoliced, whereas the courts have oversight and are an at least somewhat impartial test for truth.

Comment by Veedrac on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-21T18:03:36.193Z · LW · GW

Gwern's claim is that these other institutions won't scale up as a consequence of believing the scaling hypothesis; that is, they won't bet on it as a path to AGI, and thus won't spend this money on abstract or philosophical grounds.

My point is that this only matters on short-term scales. None of these companies are blind to the obvious conclusion that bigger models are better. The difference between a hundred-trillion dollar payout and a hundred-million dollar payout is philosophical when you're talking about justifying <$5m investments. NVIDIA trained an 8.3 B parameter model as practically an afterthought. I get the impression Microsoft's 17 B parameter Turing-NLG was basically trained to test DeepSpeed. As markets open up to exploit the power of these larger models, the money spent on model scaling is going to continue to rise.

These companies aren't competing with OpenAI. They've built these incredibly powerful systems incidentally, because it's the obvious way to do better than everyone else. It's a tool they use for market competitiveness, not as a fundamental insight into the nature of intelligence. OpenAI's key differentiator is only that they view scale as integral and explanatory, rather than an incidental nuisance.

With this insight, OpenAI can make moonshots that the others can't: build a huge model, scale it up, and throw money at it. Without this understanding, others will only get there piecewise, scaling up one paper at a time. The delta between the two is at best a handful of years.

Comment by Veedrac on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-21T15:49:46.701Z · LW · GW

If OpenAI changed direction tomorrow, how long would that slow the progress to larger models? I can't see it lasting; the field of AI is already incessantly moving towards scale, and big models are better. Even in a counterfactual where OpenAI never started scaling models, is this really something that no other company can gradient descent on? Models were getting bigger without OpenAI, and the hardware to do it at scale is getting cheaper.

Comment by Veedrac on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T03:17:34.402Z · LW · GW

Legalizing blackmail gives people with otherwise no motivation to harm someone through the sharing of information the motive to do so. I'm going to take that as the dividing line between blackmail and other forms of trade or coercion. I believe this much is generally agreed on in this debate.

If you're going to legalize forced negative-sum trades, I think you need a much stronger argument than assuming that, on net, the positive externalities will make it worthwhile. It's a bit like legalizing violence from shopkeepers because most of the time they're punching thieves. Maybe that's true now, when shopkeepers punching people is illegal, but one, I think there's a large onus on anyone suggesting this to justify that it's the case, and two, is it really going to stay the case, once you've let the system run with this newfound form of legalized coercion?

Before I read these excerpts, I was pretty much in the ‘blackmail bad, duh' category. After I read them, I was undecided; maybe it is in fact true that many harms from information sharing come with sufficient positive externalities, and those that do not are sufficiently clearly delimited to be separately legislated. Having thought about it longer, I now see a lot of counterexamples. Consider some person who:

  • had a traumatic childhood,
  • has a crush on another person, and is embarrassed about it,
  • has plans for a surprise party or gift for a close friend,
  • or the opposite; someone else is planning a surprise for them,
  • has an injury or disfiguration on a covered part of their body,
  • had a recent break-up, that they want to hold out on sharing with their friends for a while,
  • left an unkind partner, and doesn't want that person to know they failed a recent exam,
  • posts anonymously for professional reasons, or to have a better work-life balance,
  • doesn't like a coworker, but tries not to show it on the job.

I'm sure I could go on for quite a while. Legalizing blackmail means that people are de-facto incentivized to exploit information when it would harm people, because their payout stops being derived from the public interest, through mechanisms like public reception, appreciation from those directly helped by the reveal of information, or payment from a news agency, and becomes proportional almost purely to the damage they can do.

It's true that in some cases these are things which should be generally disincentivized or made illegal, nonconsensual pornography being a prime example. In general I don't think this approach scales, because the public interest is so context dependent. Sometimes it is in the public interest to share someone's traumatic childhood, spoil a surprise or tell their coworker they are disliked. But the reward should be derived from the public interest, not the harm! If we want to monetarily incentivize people to share information they have on sexual abuse, pay them for sharing information that led to a conviction. And if you're not wanting to do that because it causes the bad incentive to lie... surely blackmail gives more incentive to lie, and the accuser being paid requires the case never to have gone to trial, so is worse on all counts.

Comment by Veedrac on Why haven't we celebrated any major achievements lately? · 2020-08-18T03:27:02.683Z · LW · GW

Apple's launch events get pretty big crowds, a lot of talk, and a lot of celebration.

Comment by Veedrac on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-14T02:25:43.525Z · LW · GW

Putting aside the general question (is OpenAI good for the world?), I want to consider the smaller question: how do OpenAI's demonstrations of scaled-up versions of current models affect AI safety?

I think there's a much easier answer to this. Any risks we face from scaling up models we already have with funding much less than tens of billions of dollars amount to unexploded uranium sitting around that we're refining in microgram quantities. The absolute worst that can happen with connectionist architectures is that we solve all the hard problems without ever having tried the trivial scaled-up variants, in which case that final scaling step to superhuman AI also becomes trivial.

Even if scaling up ahead of time results in slightly faster progress towards AGI, it seems that it at least makes it easier to see what's coming, as incremental improvements require research and thought, not just trivial quantities of dollars.

Going back to the general question, one good I see OpenAI producing is the normalization of the conversation around AI safety. It is important for authority figures to be talking about long-term outcomes, and in order to be an authority figure, you need a shiny demo. It's not obvious how a company could be more authoritative than OpenAI while being less novel.

Comment by Veedrac on is gpt-3 few-shot ready for real applications? · 2020-08-09T00:27:17.176Z · LW · GW

I think the results in that paper argue that it's not really a big deal as long as you don't make some basic errors like trying to fine-tune on tasks sequentially. MT-A outperforms Full in Table 1. GPT-3 is already a multi-task learner (as is BERT), so it would be very surprising if training on fewer tasks was too difficult for it.

Comment by Veedrac on is gpt-3 few-shot ready for real applications? · 2020-08-06T20:43:46.301Z · LW · GW

If the issue is the size of having a fine-tuned model for each individual task you care about, why not just fine-tune on all your tasks simultaneously, on one model? GPT-3 has plenty of capacity.

Comment by Veedrac on Are we in an AI overhang? · 2020-07-27T20:24:53.107Z · LW · GW

Density is important because it affects both price and communication speed. These are the fundamental roadblocks to building larger models. If you scale to too large clusters of computers, or primarily use high-density off-chip memory, you spend most of your time waiting for data to arrive in the right place.

Comment by Veedrac on Are we in an AI overhang? · 2020-07-27T16:25:48.995Z · LW · GW

Moore's Law is not dead. I could rant about the market dynamics that made people think otherwise, but it's easier just to point to the data.

https://docs.google.com/spreadsheets/d/1NNOqbJfcISFyMd0EsSrhppW7PT6GCfnrVGhxhLA5PVw

Moore's Law might die in the near future, but I've yet to hear a convincing argument for when or why. Even if it does die, Cerebras presumably has at least 4 node shrinks left in the short term (16nm→10nm→7nm→5nm→3nm) for a >10x density scaling, and many sister technologies (3D stacking, silicon photonics, new non-volatile memories, cheaper fab tech) are far from exhausted. One can easily imagine a 3nm Cerebras waffle coated with a few layers of Nantero's NRAM, with a few hundred of these connected together using low-latency silicon photonics. That would easily train quadrillion parameter models, using only technology already on our roadmap.

Alas, the nature of technology is that while there are many potential avenues for revolutionary improvement, only some small fraction of them win. So it's probably wrong to look at any specific unproven technology as a given path to 10,000x scaling. But there are a lot of similarly revolutionary technologies, and so it's much harder to say they will all fail.

Comment by Veedrac on Does human choice have to be transitive in order to be rational/consistent? · 2019-08-11T08:29:39.617Z · LW · GW

Here's a rather out-there hypothesis.

I'm sure many LessWrong members have had the experience of arguing some point piecemeal, where they've managed to get weak agreement on every piece of the argument, but as soon as they step back and point from start to end their conversation partner ends up less than convinced. In this sense, in humans even implication isn't transitive. Mathematics is an example with some fun tales I'm struggling to find sources for, where pre-mathematical societies might have people unwilling to trade two of A for two of B, but happy to trade A for B twice, or other such oddities.
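
A minimal sketch of the kind of exploit such locally-consistent-but-globally-cyclic preferences invite (the goods and prices are made up):

```python
# An agent with cyclic preferences A > B > C > A will pay a little for each
# "upgrade" along the cycle, so it can be walked around the cycle indefinitely,
# losing money while every individual trade looks locally agreeable.
cycle = [("C", "B"), ("B", "A"), ("A", "C")]   # each swap moves to a preferred good
money, holding = 100.0, "C"

for _ in range(3):                 # three full laps around the cycle
    for worse, better in cycle:
        if holding == worse:
            holding = better
            money -= 1.0           # pays a small premium for each preferred swap

print(holding, money)              # back to holding "C", but 9 units poorer
```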

It's plausible to me that the need for consistent models of the world only comes about as intelligence grows and allows people to arbitrage value between these different parts of their thoughts. Early humans and their lineage before that weren't all that smart, so it makes sense that evolution didn't force their beliefs to be consistent all that much—as long as it was locally valid, it worked. As intelligence evolved, occasionally certain issues might crop up, but rather than fixing the issue in a fundamental way, which would be hard, minor kludges were put in place.

For example, I don't like being exploited. If someone leads me around a pump, I'm going to value the end state less than its ‘intrinsic' value. You can see this behaviour a lot in discussions of trolley problem scenarios: people take objection to having these thoughts traded off against each other to the degree it often overshadows the underlying dilemma. Similarly, I find gambling around opinions intrinsically uncomfortable, and notice that fairly frequently people take objection to me asking them to more precisely quantify their claims, even in cases where I'm not staking an opposing claim. Finally, since some people are better at sounding convincing than I am, it's completely reasonable to reject some things more broadly because of the possibility the argument is an exploit—this is epistemic learned helplessness, sans ‘learned'.

There are other explanations for all the above, so this is hardly bulletproof, but I think there is merit to considering evolved defenses to exploitation that don't involve being exploit-free, as well as whether there is any benefit to something of this form. Behaviours that avoid and back away from these exploits seem fairly obvious places to look into. One could imagine (sketchily, non-endorsingly) an FAI built on these principles, so that even without a bulletproof utility function, the AI would still avoid self-exploit.

Comment by Veedrac on Why do humans not have built-in neural i/o channels? · 2019-08-10T07:39:40.549Z · LW · GW

Most of the complexity in human society is unnecessary to merely outperform the competition. The exploits that prehistoric humans found were readily available; it's just that evolution could only find them by inventing a better optimizer, rather than getting there directly.

Crafting spears and other weapons is a simple example. The process to make them could be instinctual, and very little intellect is needed. Similar comments apply to clothing and cooking. If they were evolved behaviours, we might even expect parts of these weapons or tools to grow from the animal itself—you might imagine a dedicated role for one of the members of a group, who grows blades or pieces of armour that others can use as needed.

One could imagine plants that grow symbiotically with some mobile species that farms them and keeps them healthy in ways the plant itself is not able to do (eg. weeding), and in return provides nutrition and shelter, which could include enclosed walling over a sizable area.

One could imagine prey, like rabbits, becoming venomous. When resistance starts to form, they could primarily switch to a different venom for a thousand generations before switching back. In fact, you could imagine such venomous rabbits aggressively trying to drive predators extinct before they had the chance to gain a resistance; a short term cost for long-term prosperity.

The overall point is that evolution does not have the insight to get around optimization barriers. Consider brood parasites, where birds lay eggs in other species' nests. It is hypothesized that a major reason this behaviour is successful is because of retaliatory behaviour when a parasite is ejected. Clearly these victim species would be better off if they just wiped the parasites off the face of the earth, as long as they survived the one-time increased retaliation, but evolutionary pressure resulted in them evolving complicity.

Comment by Veedrac on Why do humans not have built-in neural i/o channels? · 2019-08-09T05:26:58.330Z · LW · GW
And once you have one form of communication, the pressure to develop a second is almost none.

I agree with almost all of your post, but not this, given the huge number of channels of communication that animals have. Sound, sight, smell and touch are all important bidirectional communication channels between many social animals.

Comment by Veedrac on Why do humans not have built-in neural i/o channels? · 2019-08-08T14:02:55.894Z · LW · GW

There are lots of simple things that organisms could do to make them wildly more successful. The success of human society is a good demonstration of how very low complexity systems and behaviours can drive your competition extinct, magnify available resources, and more, the vast majority of which could be easily coded into the genome in principle.

However, evolution does not make judgements about the end result. The question is whether there is a path of high success leading to your desired result. Laryngeal nerves are a good demonstration that even basic impediments won't be worked around if you can't get there step by step with appropriate evolutionary pressure. Ultimately there seems to be no impetus for a half-baked neuron tentacle, and a lot of cost and risk, so that will probably never be the path to such organisms.

There are many examples of fairly direct inter-organism communication, like RNA transfer between organisms, and to the extent that cells think in chemicals, the fact they share their chemical environment readily is a form of this kind of communication. I'm not aware of anything similarly direct at larger scales, between neurons.