LessWrong 2.0 Reader
No one (to my knowledge?) highlighted that the future might well go as follows:
“There’ll be gradual progress on increasingly helpful AI tools. Companies will roll these out for profit and connect them to the internet. There’ll be discussions about how these systems will eventually become dangerous, and safety-concerned groups might even set up testing protocols (“safety evals”). Still, it’ll be challenging to build regulatory or political mechanisms around these safety protocols so that, when they sound the alarm at a specific lab that the systems are becoming seriously dangerous, this will successfully trigger a slowdown and change the model release culture from ‘release by default’ to one where new models are air-gapped and where [...]
Hmm, I feel like I always had something like this as one of my default scenarios. Though it would of course have been missing some key details such as the bit about model release culture, since that requires the concept of widely applicable pre-trained models that are released the way they are today.
E.g. Sotala & Yampolskiy 2015 and Sotala 2018 both discussed there being financial incentives to deploy increasingly sophisticated narrow-AI systems until they finally crossed the point of becoming AGI.
S&Y 2015:
Ever since the Industrial Revolution, society has become increasingly automated. Brynjolfsson [60] argues that the current high unemployment rate in the United States is partially due to rapid advances in information technology, which has made it possible to replace human workers with computers faster than human workers can be trained in jobs that computers cannot yet perform. Vending machines are replacing shop attendants, automated discovery programs which locate relevant legal documents are replacing lawyers and legal aides, and automated virtual assistants are replacing customer service representatives.
Labor is becoming automated for reasons of cost, efficiency and quality. Once a machine becomes capable of performing a task as well as (or almost as well as) a human, the cost of purchasing and maintaining it may be less than the cost of having a salaried human perform the same task. In many cases, machines are also capable of doing the same job faster, for longer periods and with fewer errors. In addition to replacing workers entirely, machines may also take over aspects of jobs that were once the sole domain of highly trained professionals, making the job easier to perform by less-skilled employees [298].
If workers can be affordably replaced by developing more sophisticated AI, there is a strong economic incentive to do so. This is already happening with narrow AI, which often requires major modifications or even a complete redesign in order to be adapted for new tasks. ‘A roadmap for US robotics’ [154] calls for major investments into automation, citing the potential for considerable improvements in the fields of manufacturing, logistics, health care and services.
Similarly, the US Air Force Chief Scientist's [78] ‘Technology horizons’ report mentions ‘increased use of autonomy and autonomous systems’ as a key area of research to focus on in the next decade, and also notes that reducing the need for manpower provides the greatest potential for cutting costs. In 2000, the US Congress instructed the armed forces to have one third of their deep strike force aircraft be unmanned by 2010, and one third of their ground combat vehicles be unmanned by 2015 [4].
To the extent that an AGI could learn to do many kinds of tasks—or even any kind of task—without needing an extensive re-engineering effort, the AGI could make the replacement of humans by machines much cheaper and more profitable. As more tasks become automated, the bottlenecks for further automation will require adaptability and flexibility that narrow-AI systems are incapable of. These will then make up an increasing portion of the economy, further strengthening the incentive to develop AGI. Increasingly sophisticated AI may eventually lead to AGI, possibly within the next several decades [39, 200].
Eventually it will make economic sense to automate all or nearly all jobs [130, 136, 289].
And with regard to the difficulty of regulating such systems, S&Y 2015 mentioned that:
... there is no clear way to define what counts as dangerous AGI. Goertzel [115] points out that there is no clear division between narrow AI and AGI, and that attempts to establish such criteria have failed. He argues that since AGI has a nebulous definition, obvious wide-ranging economic benefits and potentially significant penetration into multiple industry sectors, it is unlikely to be regulated due to speculative long-term risks.
and in the context of discussing AI boxing and oracles, argued that both AI boxing and Oracle AI are likely to be of limited (though possibly still some) value, since there's an incentive to just keep deploying all AI in the real world as soon as it's developed:
Oracles are likely to be released. As with a boxed AGI, there are many factors that would tempt the owners of an Oracle AI to transform it to an autonomously acting agent. Such an AGI would be far more effective in furthering its goals, but also far more dangerous.
Current narrow-AI technology includes high-frequency trading (HFT) algorithms, which make trading decisions within fractions of a second, far too fast to keep humans in the loop. HFT seeks to make a very short-term profit, but even traders looking for a longer-term investment benefit from being faster than their competitors. Market prices are also very effective at incorporating various sources of knowledge [135]. As a consequence, a trading algorithm's performance might be improved both by making it faster and by making it more capable of integrating various sources of knowledge. Most advances toward general AGI will likely be quickly taken advantage of in the financial markets, with little opportunity for a human to vet all the decisions. Oracle AIs are unlikely to remain as pure oracles for long.
Similarly, Wallach [283] discusses the topic of autonomous robotic weaponry and notes that the US military is seeking to eventually transition to a state where the human operators of robot weapons are ‘on the loop’ rather than ‘in the loop’. In other words, whereas a human was previously required to explicitly give the order before a robot was allowed to initiate possibly lethal activity, in the future humans are meant to merely supervise the robot's actions and interfere if something goes wrong.
Human Rights Watch [90] reports on a number of military systems which are becoming increasingly autonomous, with the human oversight for automatic weapons defense systems—designed to detect and shoot down incoming missiles and rockets—already being limited to accepting or overriding the computer's plan of action in a matter of seconds. Although these systems are better described as automatic, carrying out pre-programmed sequences of actions in a structured environment, than autonomous, they are a good demonstration of a situation where rapid decisions are needed and the extent of human oversight is limited. A number of militaries are considering the future use of more autonomous weapons.
In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If one's opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control to AI/AGI systems.
(I also have a distinct memory of writing comments saying something like "why does anyone bother with 'the AI could escape the box' type arguments, when the fact that financial incentives would make the release of those AIs inevitable anyway makes the whole argument irrelevant", but I don't remember whether it was on LW, FB or Twitter, and none of those platforms has a good way of searching my old comments.)
8e9 on 8e9's Shortform
OpenAI is thinking about how to safely and responsibly allow its models to produce NSFW content that goes beyond answering sex-ed “birds and the bees” type questions.
I haven’t read the whole thing yet, but I’m glad they released this document (which deals with many thorny questions besides).
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview
d0themath on D0TheMath's Shortform
A list of some contrarian takes I have:
People are currently predictably too worried about misuse risks
What people really mean by "open source" vs "closed source" labs is actually "responsible" vs "irresponsible" labs, which is not affected by regulations targeting open source model deployment.
Neuroscience as an outer alignment[^rough] strategy is embarrassingly underrated.
Better information security at labs is not clearly a good thing, and if we're worried about great power conflict, probably a bad thing.
Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics. Not bad in itself, but IMO this places heavy asterisks on the results they can get.
ML robustness research (like FAR Labs' Go stuff) does not help with alignment, and moderately helps capabilities.
The field of ML is a bad field to take epistemic lessons from. Note I don't talk about the results from ML.
ARC's MAD (mechanistic anomaly detection) seems doomed to fail.
People in alignment put too much faith in the general factor g. It exists, and is powerful, but is not all-consuming or all-predicting. People are often very smart, but lack social skills, or agency, or strategic awareness, etc. And vice-versa. They can also be very smart in a particular area, but dumb in other areas. This is relevant for hiring & deference, but less for object-level alignment.
People in general are too swayed by rhetoric, and people in alignment, rationality, & EA are too, though in different ways and admittedly to a lesser extent than the general population. People should fight against this more than they seem to (which is not really at all, except for the most overt of cases). For example, I see nobody saying they don't change their minds on account of Scott Alexander because he's too powerful a rhetorician. Ditto for Eliezer, since he is also a great rhetorician. In contrast, Robin Hanson is a famously terrible rhetorician, so people should listen to him more.
There is a technocratic tendency in strategic thinking around alignment (I think partially inherited from OpenPhil, but also smart people may just be more likely to think this way) which biases people towards overly simple top-down models without recognizing how brittle those models are.
[^rough]: A non-exact term
ramblindash on Dating Roundup #3: Third Time’s the Charm
So I guess I'm not sure what you mean by that. I think it might be easier to support what I'm saying in the negative, with some examples of inauthenticity or un-openness (listed below).
The problem with doing these things is that, to the extent that doing them was necessary to gain the relationship, you are now stuck with a relationship that is built on a papered-over incompatibility. If your plan is to fake a completely different personality/goals/interests, then you will be in a relationship where you have to keep faking that stuff permanently, while constantly being wary that your new partner might find out you were faking. On top of that, you have to spend a lot of time and energy doing stuff and/or interacting with someone you don't actually like, or else end the relationship and be back at square one, except that you've invested time/energy that you won't get back. There can be toned-down good versions of this bad strategy though, I think, which are more like “putting your best foot forward” than like “being inauthentic.”
Truth: Looking for a life partner, getting desperate
Good strategy [probably depends on age, for this one]: Open to various possibilities, see how it goes.
Bad strategy: Your date says they are really only looking for short term fun, and you agree that's all you are looking for too.
Truth: A talkative person who loves debating ideas
Good strategy: Tone it down a little, try to listen as much as you talk and try to "yes, and" or "that's interesting, tell me more about what led you to that" your date's points rather than "no but" (you can often make similar points either way)
Bad strategy: Just agree with everything your date says, even if you actually have a strong opposing view
Truth: Don't really care for hiking much
Good strategy [when trying out someone who loves hiking]: "I haven't been too into that before, tell me what you love about it? I'd be open to giving it another shot"
Bad strategy: "OMG I love hiking too!"
The problem that all these bad strategies have in common is that if they are successful, you end up with something you don't want.
erioire on ErioirE's shortform
How much of the developed world's economy is devoted to aesthetic personalization of products rather than accomplishing the essential functions of [product here]?
I am not saying aesthetics or personalization are 'bad'; however, I suspect that if the cost were quantified and demonstrated to people, along with examples of more productive things that could be done with that money, many people might prefer forgoing some of our more wasteful things.
Example:
The cost of having thousands of different styles of sink faucet, instead of a small number of highly efficient and optimized faucet designs for distinct use cases [small household kitchen, large household kitchen, small form factor, high-throughput restaurant]. These costs come from the redundant engineering, design, manufacturing, and logistics overhead of each additional variation.
These same factors apply more or less to every product where variations are sold primarily for aesthetic rather than functional purposes, particularly when they replace existing functional versions.
I believe the root cause of this inefficiency is our psychological tendency to overvalue ephemeral utility, such as using possessions as social status tools, rather than trying to optimize how we collectively use our limited economic output. Imagine, for example, if a sizeable portion of the money in the market for functionally useless decorations went towards medical research instead.
I do not know how a more efficient allocation of resources could be practically enacted. According to my understanding, most attempts at centrally planned economies have been even less successful than the free market, as inefficient as it is.
If a large portion of people decided to prioritize their purchases better, that would work, but that's obviously a very challenging coordination problem.
I would guess this is somewhat similar to having a network of friends: if anything, a polycule is bound to be even smaller. And I can totally imagine being emotionally, romantically, and sexually attached to one set of partners and attached for opinion-sharing to a slightly different set.
t3t on RobertM's Shortform
Ah, does look like Zach beat me to the punch :)
I'm also still moderately confused, though I'm not that confused about labs not speaking up - if you're playing politics, then not throwing the PM under the bus seems like a reasonable thing to do. Maybe there's a way to thread the needle of truthfully rebutting the accusations without calling the PM out, but idk. Seems like it'd be difficult if you weren't either writing your own press release or working with a very friendly journalist.
nevin-wetherill on Open Thread Spring 2024
Hey, I'm new to LessWrong and working on a post. However, at some point the guidelines which pop up at the top of a fresh account's "new post" screen went away, and I cannot find the same language in the New Users Guide or elsewhere on the site.
Does anyone have a link to this? I recall a list of suggestions like "make the post object-level," "treat it as a submission for a university," "do not write a poetic/literary post until you've already gotten a couple object-level posts on your record."
It seems like a minor oversight if certain moderation guidelines/tips and tricks are impossible to find once you've already saved a draft or posted a comment.
I am not terribly worried about running headfirst into a moderation filter, as I can barely manage to write a comment that isn't as high-effort an explanation as I can come up with - but I do want that specific piece of text for reference, and now it appears to have evaporated into the shadow realm.
Am I just missing a link that would appear if I searched something else?
(Edit: also, sorry if this is the wrong place for this, I would've tried the "intercom" feature, but I am currently on the mobile version of the site, and that feature appears to be entirely missing there - and yes, I checked my settings to make sure it wasn't "hidden")
fowlertm on fowlertm's Shortform
We recently released an interview with independent scholar John Wentworth:
It mostly centers on two themes: "abstraction" (forming concepts) and "agency" (dealing with goal-directed systems).
Check it out!