Posts
Comments
This concept seems sufficiently useful that it should have a name
This post provided far more data than I needed to convince me to donate to support a site I use constantly.
One reason to prefer my position is that LLMs still seem to be bad at the kinds of tasks that rely on using serial time effectively. For these ML-research-style tasks, scaling up to human performance over a couple of hours relied on taking the best of multiple calls, which seems like parallel time. That's not the same as leaving an agent running for a couple of hours and seeing it work out something it previously would have been incapable of guessing (or that really couldn't be guessed, but only discovered through interaction). I do struggle to think of tests like this that I'm confident an LLM would fail, though. Probably it would have trouble winning a text-based RPG? Or more practically speaking, could an LLM file my taxes without committing fraud? How well can LLMs play board games these days?
I think it's net negative - it increases the profitability of training better LLMs.
Over-fascination with beautiful mathematical notation is idol worship.
I'd like to see the x-axis on this plot scaled by a couple of OOMs on a task that doesn't saturate: https://metr.org/assets/images/nov-2024-evaluating-llm-r-and-d/score_at_time_budget.png My hunch (and a timeline crux for me) is that human performance actually scales with time in a qualitatively different way - it doesn't just asymptote like LLM performance does. And even the LLM scaling with time that we do see is an artifact of careful scaffolding. I am a little surprised to see good performance up to the 2 hour mark though. That's longer than I expected. Edit: I guess only another doubling or two would be reasonable to expect.
Hmmm, my long term strategy is to build wealth and then do it myself, but I suppose that would require me to leave academia eventually :)
I wonder if MIRI would fund it? Doesn't seem likely.
Are you aware of the existing work on ignorance priors, for instance the maximum entropy prior (if I remember properly this is the Jeffreys prior, which gives rise to the KT estimator), and also the improper prior which effectively places almost all of the weight on 0 and 1? Interestingly, the universal distribution does not include continuous parameters but does end up dominating any computable rule for assigning probabilities, including these families of conjugate priors.
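For concreteness, here is a minimal sketch of the KT estimator as I understand it - the sequential predictor you get from a Beta(1/2, 1/2) prior on a Bernoulli parameter. The function names and the worked example are my own illustration, not from any particular library:

```python
def kt_probability(ones: int, zeros: int, next_bit: int) -> float:
    """KT (Krichevsky-Trofimov) estimate of P(next_bit | counts so far),
    i.e. the predictive distribution under a Beta(1/2, 1/2) prior."""
    p_one = (ones + 0.5) / (ones + zeros + 1.0)
    return p_one if next_bit == 1 else 1.0 - p_one


def kt_sequence_probability(bits) -> float:
    """Product of the KT predictive probabilities along a binary sequence."""
    prob, ones, zeros = 1.0, 0, 0
    for b in bits:
        prob *= kt_probability(ones, zeros, b)
        ones += b
        zeros += 1 - b
    return prob


# Example: P(1, 1, 0) = 1/2 * 3/4 * 1/6 = 1/16
print(kt_sequence_probability([1, 1, 0]))  # 0.0625
```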
My intuition is kind of the opposite - I think EA has a less coherent purpose. It's actually kind of a large tent for animal welfare, longtermism, and global poverty. I think some of the divergence in priorities between EAs is about impact assessment / fact finding, and a lot of ink is spilled on this, but some is probably about values too. I think of EA as very outward-facing, coalitional, and ideally a little pragmatic, so I don't think it's a good basis for an organized totalizing worldview.
The study of human rationality is a more universal project. It makes sense to have a monastic class that (at least for some years of their life) sets aside politics and refines the craft, perhaps functioning as an impersonal interface when they go out into the world - almost like Bene Gesserit advisors (or a Confessor).
I have thought about building it. The physical building itself would be quite expensive, since the monastery would need to meet many psychological requirements - it would have to be both isolated and starkly beautiful. Also, well-provisioned. So this part would be expensive, and it's an expense that EA organizations probably couldn't justify (that is, larger and more extravagant than buying a castle). Of course, most of the difficulty would be in creating the culture - but I think that building the monastery properly would go a long way (if you build it, they will come).
I think a really hardcore rationality monastery would be awesome. Seems less useful on the EA side - EAs have to interact with Overton-window-occupying institutions and are probably better off not totalizing too much.
I believe 3 is about right in principle but 5 describes humans today.
I don't think this proves probability and utility are inextricable. I prefer Jaynes' approach of motivating probabilities by coherence conditions on beliefs - later, he notes that utility and probability are on equal footing in decision theory as explained in this post, but (as far as I remember) ultimately decides that he can't carry this through to a meaningful philosophy that stands on its own. By choosing to introduce probabilities as conceptually prior, he "extricates" the two in a way that seems perfectly sensible to me.
I think that at least the weak orthogonality thesis survives these arguments in the sense that any coherent utility function over an ontology "closely matching" reality should in principle be reachable for arbitrarily intelligent agents, along some path of optimization/learning. Your only point that seems to contradict this is the existence of optimization daemons, but I'm confident that an anti-daemon immune system can be designed, so any agent that chooses to design itself in a way where it can be overtaken by daemons must do this with the knowledge that something close to its values will still be optimized for - so this shouldn't cause much observable shift in values.
It's unclear how much measure is assigned to various "final/limiting" utility functions by various agent construction schemes - I think this is far beyond our current technical ability to answer.
Personally, I suspect that the angle is more like 60 degrees, not 3.
“Cancel culture is good actually” needs to go in the hat ;)
You may be right that the benefits are worth the costs for some people, but I think if you have access to a group interested in doing social events with plausible deniability, that group is probably already a place where you should be able to be honest about your beliefs without fear of "cancellation." Then it is preferable to practice (and expect) the moral courage / accountability / honesty of saying what you actually believe and defending it within that group. If you don't have a group of people interested in doing social events with plausible deniability, you probably can't do them, and this point is moot. So I'm not sure I understand the use case - you have a friend group that is a little cancel-ish but still interested in expressing controversial beliefs? That sounds like something that is not a rationalist group (or maybe I am spoiled by the culture of Jenn's meetups).
This kind of thing does justified harm to our community’s reputation. If you have fun arguing that only white people can save us while deliberately obfuscating whether you actually believe that, it is in fact a concerning sign about your intentions/seriousness/integrity/trustworthiness.
I don't believe that these anthropic considerations actually apply, either to us, to oracles, or to Solomonoff induction. The arguments are too informal; it's very easy to miscalculate Kolmogorov complexities and the measures assigned by the universal distribution using intuitive gestures like this. However, I do think that this is a correct generalization of the idea of a malign prior, and I actually appreciate that you wrote it up this way, because it makes clear that none of the load-bearing parts of the argument actually rely on reliable calculations (invocations of algorithmic information theory concepts have not been reduced to rigorous math, so the original argument is not stronger than this one).
My impression is that e.g. the Catholic church has a pretty deeply thought out moral philosophy that has persisted across generations. That doesn't mean that every individual Catholic understands and executes it properly.
- Perhaps Legg-Hutter intelligence.
- I'm not sure how much the goal matters - probably the details depend on the utility function you want to optimize. I think you can do about as well as possible by carving out a utility function module and designing the rest uniformly to pursue the objectives of that module. But perhaps this comes at a fairly significant cost (i.e. you'd need a somewhat larger computer to get the same performance if you insist on doing it this way).
- ...And yes, there does exist a computer program which is remarkably good at just chess and nothing else, but that's not the kind of thing I'm talking about here.
- Yes, the I/O channels should be fixed along with the hardware.
The standard method for training LLMs is next-token prediction with teacher forcing, penalized by the negative log-loss. This is exactly the right setup to elicit calibrated conditional probabilities, and exactly the "prequential problem" that Solomonoff induction was designed for. I don't think this was motivated by decision theory, but it definitely makes perfect sense as an approximation to Bayesian inductive inference - the only missing ingredient is acting to optimize a utility function based on this belief distribution. So I think it's too early to suppose that decision theory won't play a role.
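To make the setup concrete, here is a minimal sketch of teacher-forced next-token prediction with the negative log-loss, assuming PyTorch and a toy embedding-plus-linear model as a stand-in for an LLM (the shapes and names are illustrative, not anyone's actual training code):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
# Toy stand-in for an LLM: embed each token and predict a distribution over the next one.
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))

tokens = torch.randint(0, vocab_size, (4, 16))  # (batch, sequence length)

# Teacher forcing: the model is conditioned on the true tokens t_1..t_{n-1}
# and scored on its predicted distribution for t_2..t_n.
logits = model(tokens[:, :-1])   # (batch, seq - 1, vocab)
targets = tokens[:, 1:]          # observed next tokens

# Negative log-loss = -sum_k log P(t_{k+1} | t_1..t_k); since log-loss is a proper
# scoring rule, its minimizer is the calibrated conditional distribution.
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
```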
What would you have to see proven about Solomonoff induction to conclude it does not have convergence/calibration problems? My friend Aram Ebtekar has worked on showing it converges to 50% on adversarial sequences.
Perhaps LLMs are starting to approach the intelligence of today's average human: capable of only limited original thought, unable to select and autonomously pursue a nontrivial coherent goal across time, and having learned almost everything they know from reading the internet ;)
No, that seems paywalled. I'm curious though?
An example I've been studying obsessively: https://www.lesswrong.com/posts/Yz33koDN5uhSEaB6c/sherlockian-abduction-master-list
How do you suggest advocating for this effectively?
I'm in Canada so can't access the latest Claude, so my experience with these things does tend to be a couple months out of date. But I'm not really impressed with models spitting out slightly wrong code that tells me what functions to call. I think this is essentially a more useful search engine.
I've noticed occasional surprises in that direction, but none of them seem to shake out into utility for me.
Semi-interestingly, my MMA school taught that it's best for the punch to arrive before the leading foot lands so that the punch carries your full weight. Many people at advanced levels weren't aware of this because we did not introduce it right away - if you try to do this before learning a few other details (and building strength), you run a risk of hurting your wrist by punching too hard.
I've been waiting to say this until OpenAI's next larger model dropped, but this has now failed to happen for so long that it's become its own update, and I'd like to state my prediction before it becomes obvious.
This doesn't seem to be reflected in the general opinion here, but it seems to me that LLMs are plateauing and possibly have already plateaued a year or so ago. Scores on various metrics continue to go up, but this tends to provide weak evidence because they're heavily gamed and sometimes leak into the training data. Still, those numbers overall would tend to update me towards short timelines, even with their unreliability taken into account - however, this is outweighed by my personal experience with LLMs. I just don't find them useful for practically anything. I have a pretty consistently correct model of the problems they will be able to help me with, and it's not a lot - maybe a broad introduction to a library I'm not familiar with or detecting simple bugs. That model has worked for a year or two without expanding the set much. Also, I don't see any applications to anything economically productive except for fluffy chatbot apps.
I think this is a story about anthropic immortality.
Thanks! I am particularly interested in the hook grip calluses on thumbs; I'll look into that.
Calluses at the base of the finger (say, the knuckle-joint of the palm) are in my experience very difficult to classify. I get them there from climbing, as you said, and though I also get some calluses on my fingers, those tend to be less persistent and probably disappear most of the time (after climbing for a while at my level of intensity I stop getting calluses). I have also seen them from biking - when I started out I used to look at people's palms a lot and never came up with a reliable way to distinguish this from weightlifting. But if you could go into some more detail on the differences, perhaps I'll add a more speculative entry and see how it stands up!
(If it's your first post on lesswrong, welcome! I think you'll find that kindness/politeness is the community norm here)
I haven't been able to verify that Protestants don't wear a cross on a chain - it seems like they prefer an empty cross to the more Catholic-coded crucifix, but this doesn't seem to be what you meant?
Technically, the connection between the computability levels of AIT (estimability, lower/upper semi-computability, approximability) and the Turing degrees has not been worked out properly. See chapter 6 of Leike's thesis, though there is a small error in the inequalities of section 6.1.2. It is necessary to connect the computability of real-valued functions (Type-2 Theory of Effectivity) to the arithmetic hierarchy - as far as I know this hasn't been done, but maybe I'll share some notes in a few months.
Roughly, most classes don't have a universal distribution because they are not computably enumerable, but perhaps there are various reasons. There's a nice table in Marcus Hutter's original book, page 50.
It says that (negative log) universal probability is about the same as the (monotone) Kolmogorov complexity - in the discrete case up to an additive constant, equivalently up to a constant multiple on the probabilities. Basically, the Bayesian prediction is closely connected to the shortest explanation. See Li and Vitanyi's "An Introduction to Kolmogorov Complexity and its Applications."
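Spelled out, the discrete statement I have in mind is (my paraphrase; see Li and Vitanyi for the precise version):

```latex
% Coding theorem (discrete case): the universal semimeasure m and prefix
% complexity K agree up to an additive constant in -log, equivalently the
% probabilities agree up to a constant multiple:
-\log \mathbf{m}(x) \;=\; K(x) + O(1)
\quad\Longleftrightarrow\quad
\mathbf{m}(x) \;=\; 2^{-K(x) + O(1)}.
```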
Last question is a longer story I guess. Basically, the conditionals of the universal distribution are not lower semi-computable, and it gets even worse when you have to compare the expected values of different outcomes because of tie-breaking. But a good approximation of AIXI can still be computed in the limit.
Nice things about the universal distribution underlying AIXI include:
- It is one (lower semi-)computable probabilistic model that dominates, in the measure-theoretic sense, all other (lower semi-)computable probabilistic models (spelled out in the formulas after this list). This is not possible to construct for most natural computability levels, so it's neat that it works.
- It unites compression and prediction through the coding theorem - though this is slightly weaker in the sequential case.
- It has two very natural characterizations, either as feeding random bits to a UTM or as an explicit mixture of lower semi-computable environments.
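To spell out the dominance and mixture properties from the first and third bullets (notation roughly follows Hutter's book, where M is the class of lower semi-computable semimeasures and K(ν) is the prefix complexity of an index for ν):

```latex
% Mixture characterization of the universal distribution \xi, and the
% dominance it immediately implies over every \nu in the class M:
\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\,\nu(x)
\qquad\Longrightarrow\qquad
\xi(x) \;\geq\; 2^{-K(\nu)}\,\nu(x) \quad \text{for all } \nu \in \mathcal{M}.
```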
With the full AIXI model, Professor Hutter was able to formally extend the probabilistic model to interactive environments without damaging the computability level. Conditioning and planning do damage the computability level but this is fairly well understood and not too bad.
I'm starting a Google group for anyone who wants to see occasional updates on my Sherlockian Abduction Master List. It occurred to me that anyone interested in the project would currently have to check the list to see any new observational cues (infrequently) added - also, some people outside of lesswrong are interested.
I would be very interested to see what you come up with!
Would be nice to be able to try it out without signing up
I think it's mostly about elite outreach. If you already have a sophisticated model of the situation you shouldn't update too much on it, but it's a reasonably clear signal (for outsiders) that x-risk from A.I. is a credible concern.
Personally I'm unlikely to increase my neuron-neuron bandwidth anytime soon, sounds like a very risky intervention even if possible.
My guess is that it would be very hard to get to millions of connections, so maybe we agree, but I'm curious if you have more specific info. Why is it not the bottleneck though?
I'm not a neuroscientist / cognitive scientist, but my impression is that rapid eye movements are already much faster than my conscious deliberation. Intuitively, this means there's already a lot of potential communication / control / measurement bandwidth left on the table. There is definitely a point beyond which you can't increase human intelligence without effectively adding more densely connected neurons or uploading and increasing clock speed. Honestly I don't think I'm equipped to go deeper into the details here.
You're talking about a handful of people, so the benefit can't be that large.
I'm not sure I agree with either part of this sentence. If we had some really excellent intelligence augmentation software built into AR glasses we might boost on the order of thousands of people. Also I think the top 0.1% of people contribute a large chunk of economic productivity - say on the order of >5%.
I think there's a reasonable chance everything you said is true, except:
What you're actually doing is doing the 5% boost, and never doing the other stuff.
I intend to do the other stuff after finishing my PhD - though its not guaranteed I'll follow through.
The next paragraph is low confidence because it is outside of my area of expertise (I work on agent foundations, not neuroscience):
The problem with neuralink etc. is that they're trying to solve the bandwidth problem, which is not currently the bottleneck and will take too long to yield any benefits. A full neural lace is maybe similar to a technical solution to alignment in the sense that we won't get either within 20 years at our current intelligence levels. Also, I am not in a position where I have enough confidence in my sanity and intelligence metrics to tamper with my brain by injecting neurons into it and stuff. On the other hand, even minor non-invasive general fluid intelligence increase at the top of the intelligence distribution would be incredibly valuable, and profits could be reinvested in more hardcore augmentation down the line. I'd be interested to hear where you disagree with this.
It almost goes without saying that if you can make substantial progress on the hardcore approaches that would be much, much more valuable than what I am suggesting, and I encourage you to try.
I think I'm more optimistic about starting with relatively weak intelligence augmentation. For now, I test my fluid intelligence at various times throughout the day (I'm working on better tests justified by algorithmic information theory in the style of Prof. Hernandez-Orallo - like this one, which sucks to take: https://github.com/mathemajician/AIQ - but for now I use my own here: https://github.com/ColeWyeth/Brain-Training-Game), and I correlate the results with everything else I track about my lifestyle using reflect: https://apps.apple.com/ca/app/reflect-track-anything/id6463800032 which I endorse, though I should note it's owned/invented by a couple of my friends/former coworkers. I'll post some intermediate results soon. Obviously this kind of approach alone will probably only provide a low single digit IQ boost at most, but I think it makes sense to pick the low-hanging fruit first (then attempt incrementally harder stuff with the benefit of being slightly smarter). Also, accurate metrics and data collection should be established as early as possible. Ultimately I want to strap some AR goggles on and measure my fluid intelligence in real time, ideally from eye movements in response to some subconscious stimulation (I haven't vetted the plausibility of this idea at all).
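As a toy illustration of the kind of correlation I have in mind (hypothetical column names and made-up numbers, not my actual data), a first pass with pandas looks like:

```python
import pandas as pd

# Hypothetical daily log: a fluid intelligence score alongside tracked lifestyle factors.
log = pd.DataFrame({
    "test_score":     [102, 98, 105, 99, 107, 101],
    "hours_of_sleep": [7.5, 6.0, 8.0, 6.5, 8.5, 7.0],
    "caffeine_mg":    [150, 300, 100, 250, 80, 200],
})

# Simple first pass: correlate the score with each tracked variable.
print(log.corr()["test_score"])
```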
The executive summary seems essentially right to me. My only objection is that Phase 4 should probably be human intelligence augmentation.
You raise an interesting point about virtue ethics - I don't think that is required for moral coherence, I think it is just a shortcut. A consequentialist must be prepared to evaluate ~all outcomes to approach moral coherence, but a virtue ethicist really only needs to evaluate their own actions, which is much easier.
Presented the Sherlockian abduction master list at a Socratica node:
Presented this list and idea at a Socratica node:
Verbal statements often have context-dependent or poorly defined truth values, but observations are pretty (not completely) solid. Since useful models eventually shake out into observations, the binary truth values tagging observations "propagate back" through probability theory to make useful statements about models. I am not convinced that we need a fuzzier framework - though I am interested in the philosophical justification for probability theory in the "unrealizable" case where no element of the hypothesis class is true. For instance, it seems that the universal distribution's mixture is over probabilistic models, none of which should necessarily be assumed true - they are just the widest class we can compute.
Improving computer security seems possible, but there are many other attack vectors. For instance, even if an A.I. can prove a system's software is secure, it may choose to introduce social-engineering-style back doors if it is not aligned. It's true that controlled A.I.s can be used to harden society, but overall I don't find that strategy comforting.
I’m not convinced that this induction argument goes through. I think it fails on the first generation that is smarter than humans, for basically Yudkowskian reasons.
Imagine that there are just a few labs with powerful A.I., all of which are responsible enough to use existing A.I. control strategies which have been prepared for this situation, and none of which open source their models. Now if they successfully use their A.I. for alignment, they will also be able to successfully use it for capabilities research. At some point, control techniques will no longer be sufficient, and we have to hope that by then A.I.-aided alignment has succeeded enough to prevent bad outcomes. I don't believe this is a serious possibility; the first A.I. capable of solving the alignment problem completely will also be able to deceive us about solving the alignment problem (more) easily - up to and including this point, A.I. will produce partial, convincing solutions to the alignment problem which human engineers will go forward with. Control techniques will simply put a lower bound on the capabilities of the first unaligned A.I. that escapes, which is plausibly a net negative since it means we won't have early high-impact warnings. If occasional A.I. escapes turn out to be non-lethal, economic incentives will favor better A.I. control, so working on this early won't really matter. If occasional A.I. escapes turn out to be lethal, then we will die unless we solve the alignment problem ourselves.