Comments sorted by top scores.
comment by ChristianKl · 2023-07-10T11:20:53.526Z · LW(p) · GW(p)
If you look at the top priorities of the US military, fighting the wars in Iraq and Afghanistan was one of them. In both cases the results were horrific. If the US military were competent, it would likely manage to defund useless paratroopers. Snowden is safely in Russia despite the US intelligence community's strong interest in him not being free.
If we look at lying, the Afghanistan war was not set up in a way that prevented soldiers in Afghanistan from lying to their superiors; quite the opposite. Soldiers were encouraged to lie in ways that exaggerated the strength of the Afghan military. As a result, predictions about what would happen after the withdrawal were wrong.
The drive to move away from ineffective cost-plus contracting doesn't come from inside the military but from Silicon Valley companies like Palantir and Anduril. Military contracting seems highly corrupt and inefficient.
Trump opposes those elites in multiple ways, and he won his first presidential election. Right now he's well on his way to winning the Republican primary again, and maybe even the general election, even though deep state elites don't want him in power.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T11:45:50.143Z · LW(p) · GW(p)
If you look at the top priorities of the US military, fighting the wars in Iraq and Afghanistan was one of them. In both cases the results were horrific. If the US military were competent, it would likely manage to defund useless paratroopers.
I generally think that "hypercompetent" was a poor word choice on my part. I agree that the Iraq and Afghanistan wars are excellent case studies, and that incompetence in the war on terror gives good Bayesian updates, especially against "hypercompetence", since a hypercompetent inner regime in the US would be more capable of investigating the competence of US forces without worrying so much about stretching itself thin and giving away the positions of its VIPs (the same goes for Russia). This is the evidence that my models are tested against. I think that, in reality, things related to military bureaucracies and officers' academies would evolve to goodhart metrics in ways that fool an inner regime in particular. They would be specialists at that, especially if infiltration risk means that sending trusted VIPs to infiltrate a wide variety of bureaucracies runs a major risk of a large proportion of those trusted agents being compromised by foreign intelligence agencies.
Snowden is safely in Russia despite the US intelligence community's strong interest in him not being free.
I don't see how this is relevant. Russia needs a reputation for protecting elite US defectors, especially ones who, like Snowden, deliberately make permanent and drastic alterations to global trends.
Is Assange a better example, since Ecuador and Sweden are within the US sphere of influence, whereas Russia delineates its edges? I understand Snowden extremely well (that stuff is my area of expertise; most stuff here is outside of it), but I don't know much about Assange.
If we look at lying, the Afghanistan war was not set up in a way that prevented soldiers in Afghanistan from lying to their superiors; quite the opposite. Soldiers were encouraged to lie in ways that exaggerated the strength of the Afghan military. As a result, predictions about what would happen after the withdrawal were wrong.
Can you go into more details about this, or suggest further reading? This is interesting and relevant.
The drive to move away from ineffective cost-plus contracting doesn't come from inside the military but from Silicon Valley companies like Palantir and Anduril. Military contracting seems highly corrupt and inefficient.
A big reason why my competent-inner-regime model runs into falsifiability problems is that the competent, trustworthy VIPs and the monopolized technology (e.g. functioning lie detectors) would be deployed conservatively: stretching networks thin means more surface area, which means more risk of being compromised by foreign intelligence agencies.
I don't know much about the history of Palantir or other Silicon Valley defense companies, and am interested in that (specifically Palantir and similar companies). It's possible that the rise of highly effective Silicon Valley defense companies came from a competent inner regime getting better at recognizing and cooperating with outside talent during the 2010s, rather than just from Palantir massively outperforming contractors on obvious metrics.
Trump opposes those elites in multiple ways, and he won his first presidential election. Right now he's well on his way to winning the Republican primary again, and maybe even the general election, even though deep state elites don't want him in power.
I don't think Trump is a good example, because even during his first campaign in 2015-16, when he was least a creature of the system, it's still not clear to what extent his promised purges could have benefited various elites; the promises were always pretty vague, and the purges could have been retargeted at anyone inconvenient.
Replies from: ChristianKl
↑ comment by ChristianKl · 2023-07-11T16:17:28.391Z · LW(p) · GW(p)
Can you go into more details about this, or suggest further reading? This is interesting and relevant.
It's my understanding that the US military set goals for training the Afghan military to have certain capabilities. When high-level commanders had those goals, they uncritically accepted bogus numbers from lower-level commanders about how successful that training was.
The general narrative that the US military wanted to tell the US public was that they were good at training the Afghan military.
I don't have good links for that; I'm writing from my memory of the time after the withdrawal.
I don't see how this is relevant. Russia needs a reputation for protecting elite US defectors, especially ones who, like Snowden, deliberately make permanent and drastic alterations to global trends.
The fact that Snowden went from Hong Kong to Russia and found shelter there instead of being captured suggests that the US intelligence services were outmaneuvered by Snowden and WikiLeaks (WikiLeaks helped Snowden here).
I don't know much about the history of Palantir or other Silicon Valley defense companies, and am interested in that (specifically Palantir and similar companies).
As Peter Thiel tells it: PayPal was faced with a lot of fraud. One key challenge they had to overcome was developing technology that used machine learning to detect fraud. After they sold PayPal, they thought that the same machine-learning approach used to detect fraud could also be used to help analyze intelligence.
There was also the idea that it's very important to actually write the rules for how surveillance can be used into code, to reduce abuse of intelligence capabilities; Palantir's marketing is that they do that.
It's possible that the rise of highly effective Silicon Valley defense companies came from a competent inner regime getting better at recognizing and cooperating with outside talent during the 2010s, rather than just from Palantir massively outperforming contractors on obvious metrics.
The fact that cost-plus contracting is bad at producing efficient technology is not about outperforming at specific metrics.
Palantir wouldn't have needed to sue the Army if the Army's leaders had been competent.
Palantir, Anduril, and SpaceX, which are the main big new defense contractors, are all opposed to cost-plus contracting.
SpaceX is an important defense contractor but that wasn't Musk's motivation. It's just a good way for SpaceX to make money.
Anduril's founder said that the F-35 program ballooning to over 1 trillion dollars was part of his motivation for thinking that current military procurement is broken and needs innovation.
Pilots are important in Air Force decision-making, and thus the 1 trillion dollars is invested not in some advanced drone but in a craft that can be piloted by humans.
F-35s aren't the crucial component for winning wars like those in Iraq or Afghanistan. They also aren't the kind of weapon that's important for defending Taiwan. They are just what Air Force culture wants, rather than a choice made by a hypercompetent military.
I don't think Trump is a good example, because even during his first campaign in 2015-16, when he was least a creature of the system, it's still not clear to what extent his promised purges could have benefited various elites; the promises were always pretty vague, and the purges could have been retargeted at anyone inconvenient.
Trump made Richard Grenell, who's independent enough to go to Glenn Greenwald, acting head of the DNI. Trump also put a lot of other people who were not creatures of the system into various important roles.
If the DNI is an organization that uses lie detection equipment the way you are asserting, this would be pretty threatening to the system.
Replies from: TrevorWiesinger, TrevorWiesinger, alkexr
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T19:34:24.902Z · LW(p) · GW(p)
If the DNI is an organization that uses lie detection equipment the way you are asserting, this would be pretty threatening to the system.
There is only one thing I'm asserting about lie detection technology, which is that people who think lie detection technology doesn't work (even honest people with a background in the area) are not to be trusted. That is exactly the kind of area where massive numbers of people, including experts in the field, receive military/counterintelligence disinformation instead of access to the actual technology (e.g. the heart attack gun), up to and including data poisoning of their datasets.
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T19:31:15.953Z · LW(p) · GW(p)
The fact that cost-plus contracting is bad at producing efficient technology is not about outperforming at specific metrics.
Palantir wouldn't have needed to sue the Army if the Army's leaders had been competent.
Palantir, Anduril, and SpaceX, which are the main big new defense contractors, are all opposed to cost-plus contracting.
I think this is very strong evidence against a "hypercompetent" inner regime, and a somewhat strong update against inner regime competence, and such stagnation in a key military area definitely explains the assumption of uniform incompetence that I've frequently encountered. I argue that it's a strong update, but not an extremely strong one, since an inner regime would be concerned about spreading its VIPs thin, increasing the surface area for infiltration, by sending trusted people to be immersed in the leadership AND middle management throughout the armed forces and defense industry.

Doing so would make the US much better at outproducing and outperforming Russia and China militarily, but the status quo between the US and Russia and China only started declining in value ~10 years ago; before then, when these systems were set up, it potentially wouldn't have been worth it for a competent, survival-focused inner regime to spread itself thin in order to get the country really good at building up its forces.

There's still the fact that ballooning costs at least indicate short time horizons in such an inner regime, which is extremely difficult to distinguish from incompetence, even uniform incompetence. But it still seems reasonable that a highly competent inner regime in the 90s and 2000s would just assume that these things have always been inefficient and wasteful and thus realistically always would be. A failure of imagination there is not incompatible with high levels of inner regime competence.
↑ comment by alkexr · 2023-07-11T17:36:18.134Z · LW(p) · GW(p)
F-35s aren't the crucial component for winning wars like those in Iraq or Afghanistan. They also aren't the kind of weapon that's important for defending Taiwan. They are just what Air Force culture wants, rather than a choice made by a hypercompetent military.
I mostly agree with your perception of state (or something) competence, but this seems to me like a sloppy argument? True, the US does have to prepare for the most likely wars, but it also has to be prepared for all the other wars that don't happen because it was prepared, a.k.a. deterrence. The F-35 may not be the most efficient asset when it comes to e.g. Taiwan, but it's useful in a wide range of scenarios, and it's difficult to predict exactly what one will need, as these platforms have to be planned decades in advance.
Not sure how to put this in a way that isn't overly combative, but since the only point you made where I have domain-specific understanding seems to be sloppy, it makes me wonder how much I should trust the rest? At a glance it doesn't look like airtight reasoning.
EDIT: As a side note, what Air Force culture wants is in itself a military consideration. It's often better to have the gear that works well with established doctrine than some other technology that outperforms it on paper.
Replies from: ChristianKl
↑ comment by ChristianKl · 2023-07-11T23:26:25.149Z · LW(p) · GW(p)
The F-35 may not be the most efficient asset when it comes to e.g. Taiwan, but it's useful in a wide range of scenarios
I didn't just talk about Taiwan; I also talked about Afghanistan and Iraq. Those were wars that the US military essentially lost.
The US military failed to create the kind of innovation that they would have needed to pursue those conflicts successfully.
The F-35 also doesn't help with the Ukraine war.
and it's difficult to predict exactly what one will need, as these platforms have to be planned decades in advance.
A key alternative to the F-35 program would have been unmanned aircraft for the same job.
True, the US does have to prepare for the most likely wars, but it also has to be prepared for all the other wars that don't happen because it was prepared, a.k.a. deterrence.
What wars do you think the F-35 deters?
At a glance it doesn't look like airtight reasoning.
When it comes to military matters, the beliefs I have come from reading some articles and interviews. I wouldn't be surprised if there are other people here with a lot more domain knowledge.
Evaluating whether or not the military spends its money well is generally hard as a lot of relevant information is secret.
Palmer Luckey from Anduril, who would know, seems to say that there was severe underinvestment in autonomous vehicles.
Alex Karp from Palantir also speaks about the military's underinvestment in AI.
Replies from: alkexr
↑ comment by alkexr · 2023-07-12T01:12:32.033Z · LW(p) · GW(p)
I'm not an expert either, and I won't try to end the F-35 debate in a few sentences. I maintain my position that the original argument was sloppy. "F-35 isn't the best for specific wars X, Y and Z, therefore it wasn't a competent military decision" is a non sequitur. "Experts X, Y and Z believe that the F-35 wasn't a competent decision" would be better in this case, because that seems to be the real reason why you believe what you believe.
Replies from: ChristianKl
↑ comment by ChristianKl · 2023-07-12T11:24:59.884Z · LW(p) · GW(p)
"F-35 isn't the best for specific wars X, Y and Z, therefore it wasn't a competent military decision" is non sequitur. "Experts X, Y and Z believe that the F-35 wasn't a competent decision" would be better in this case, because that seems to be the real reason why you believe what you believe.
Generally, in security, threat modeling is important. There's the saying "Generals always fight the last war", which is about a common mistake in militaries: not doing enough threat modeling and not investing in the technology that would actually help with the important threats.
There are forces that keep established military units from looking for new ways of acting. Pilots want planes that are flown by pilots. Defense contractors want to produce weapons that match their competencies.
I do see the question of whether a military is able to think well about future threats and then invest money into building technology to counter those threats as an important aspect of competency.
It's not that I just copied this position from someone else; I have a model, fed by what I read, which I apply.
The argument I made also seems to be made by military generals:
Earlier this month, the US Navy’s top officer, Admiral Michael Gilday, lit into defence contractors at a major industry conference for lobbying Congress to “build the ships that you want to build” and “buy aircraft we don’t need” rather than adapt to systems needed to counter China. “It’s not the ’90s any more,”
"Aircraft we don't need" is what the F-35 program is about. The main threat related to countering China is defending Taiwan (and hopefully doing so in a way where deterrence prevents the war from happening in the first place).
EDIT:
If you were to argue that the Navy already holds the correct position here because Michael Gilday is advocating it: if there were a hypercompetent faction in the military, that group should have no problem exerting its power to get defense contractors to produce the weapons that high-level military leaders consider desirable to develop.
comment by alkexr · 2023-07-10T10:33:02.365Z · LW(p) · GW(p)
- You're saying that these hypothetical elites are hypercompetent to such a hollywoodical degree that normal human constraints that apply to everyone else don't apply to them, because of "out of distribution" reasons. It seems to me that "out of distribution" here stands in as a synonym for magic [? · GW].
- You're saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
- It seems to me that lie detection technology makes the scenario you're worried about even less likely? It would be enough for just a single individual from the hypothetical hypercompetent elites to testify under lie detection that they indeed honestly believe that AI poses a risk to them.
- It's worth pointing out, I suppose, that the military-industrial complex is still strongly interested in the world not being destroyed, no matter how cartoonishly evil you may believe they are otherwise, unless they are a Cthulhu cult or something. They could still stumble into such a terrible outcome via arms race dynamics, but if hypercompetence means anything, it's something that makes such accidents less, not more, likely.
My arguments end here. From this point on, I just want to talk about... smell. Because I smell anxiety.
Your framing isn't "here is what I think is most likely the truth". Your framing is "here is something potentially very dangerous that we don't know and can't possibly ever really know".
Also, you explicitly, "secretly" ask for downvotes. Why? Is something terrible going to happen if people read this? It's just a blogpost. No, it's not going to accidentally push all of history off course down into a chasm.
Asking for downvotes also happens to be a good preemptive explanation of negative reception. Just to be clear, I downvoted not because I was asked to. I downvoted because of poor epistemic standards.
Do note that I'm aware that very limited information is available to me. I don't know anything about you. I'm just trying to make sense of the little I see, and the little I see strongly pattern matches with anxiety. This is not any sort of an argument, of course, and there isn't necessarily anything wrong with that, but I feel it's still worth bringing up.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T12:12:54.740Z · LW(p) · GW(p)
- You're saying that these hypothetical elites are hypercompetent to such a hollywoodical degree that normal human constraints that apply to everyone else don't apply to them, because of "out of distribution" reasons. It seems to me that "out of distribution" here stands in as a synonym for magic [? · GW].
I think that "hypercompetent" was a poor choice of words on my part, since the crux of the post is that it's difficult to evaluate the competence of opaque systems.
- You're saying that these hypothetical elites are controlling the world thanks to their hypercompetence, but are completely oblivious to the idea that they themselves could lose control to an AI that they know to be hypercompetent relative to them.
It's actually the other way around; existing as an inner regime means surviving many years of evolutionary pressure from being targeted by all the rich, powerful, and high-IQ people in any civilization or sphere of influence (and the model describes separate inner regimes in the US, China, and Russia, which are in conflict, not a single force controlling all of civilization). That is an extremely wide variety of evolutionary pressure (which can shape people's development), because any large country has an extremely diverse variety of rich, powerful, and/or high-IQ people.
- It seems to me that lie detection technology makes the scenario you're worried about even less likely? It would be enough for just a single individual from the hypothetical hypercompetent elites to testify under lie detection that they indeed honestly believe that AI poses a risk to them.
The elites I'm describing are extremely in tune with the idea that it's worthwhile for foreign intelligence agencies to shape information warfare policies to heavily prioritize targeting members of the inner regime. Therefore, it's worthwhile for them to cut themselves off from popular media entirely, and to be extremely skeptical of anything that unprotected people believe.
- It's worth pointing out, I suppose, that the military-industrial complex is still strongly interested in the world not being destroyed, no matter how cartoonishly evil you may believe they are otherwise, unless they are a Cthulhu cult or something. They could still stumble into such a terrible outcome via arms race dynamics, but if hypercompetence means anything, it's something that makes such accidents less, not more, likely.
I definitely think that molochian races-to-the-bottom are a big element here. I hope that the people in charge aren't all nihilistic moral relativists, even though that seems like the kind of thing that would happen.
My arguments end here. From this point on, I just want to talk about... smell. Because I smell anxiety.
You definitely sensed/smelled real anxiety! The more real exposure people get to dangerous forces like intelligence agencies, the more they realize that it makes sense to be scared. I definitely think the prospect of EA/LW stepping on the toes of powerful people, and being destroyed or damaged as a result, is scary.
Is something terrible going to happen if people read this? It's just a blogpost. No, it's not going to accidentally push all of history off course down into a chasm.
Specific strings of text in specific places can absolutely push all of history off course down into a chasm!
As you might imagine, writing this post feels a bit tough for a bunch of reasons. It’s a sensitive topic, there are lots of complicated legal issues to consider, and it’s generally a bit weird to write publicly about an agency that’s in the middle of investigating you (it feels a little like talking about someone in the third person without acknowledging that they’re sitting at the dinner table right next to you).
-Howie Lempel, EA Forum, Regulatory Inquiry into Effective Ventures Foundation UK [EA · GW]
↑ comment by alkexr · 2023-07-11T17:05:15.133Z · LW(p) · GW(p)
- You believe that there is a strong evolutionary pressure to create powerful networks of individuals that are very good at protecting their interests and surviving in competition with other similar networks
- You believe that these networks utilize information warfare to such an extent that they have adapted by cutting themselves off from most information channels, and are extremely skeptical of what anyone else believes
- You believe that this policy is a better adaptation to this environment than what anyone else could come up with
- These networks have adapted by being so extremely secretive that it's virtually impossible to know anything about them
- You happen to know that these networks have certain (self-perceived) interests related to AI
- You happen to believe that these networks are dangerous forces and it makes sense to be scared
- This image that you have of these networks leads to anxiety
- Anxiety leads to you choosing and promoting a strategy of self-deterrence
- Self-deterrence leads to these networks having their (self-perceived) interests protected at no cost to them
Given the above premises (which, for the record, I don't share), you have to conclude that there's a reasonable chance that your own theory is an active information battleground.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T17:40:24.144Z · LW(p) · GW(p)
My model actually holds that information warfare mostly became an issue recently (the last 10-20 years) and that these institutions evolved before that. Mainly, information warfare is worth considering because:
1) it is highly relevant to AI governance, as no matter what your model of government elites looks like, the modern information warfare environment strongly indicates that they will (at least initially) see the concept of a machine god as some sort of 21st-century-style ploy
2) although there are serious falsifiability problems that limit the expected value of researching potential high-competence decision-making and institutional structure within intelligence agencies, I'm arguing that the expected value is not very low, because the evidence for incompetence is also weak (albeit less weak), and that evidence of incompetence all the way up is also an active information battleground (e.g. the news articles about Trump and the nuclear chain of command during the election dispute and January 6th).
comment by Mitchell_Porter · 2023-07-09T23:20:04.152Z · LW(p) · GW(p)
You've said before, don't lobby to ban AI, because that might make you an enemy of the deep state. Here you say, focus on speeding up AI safety, rather than slowing down AI capabilities, for the same reason.
Well, I would just point out that the risks are what they are, regardless of whether or not the people in charge can accept it. If the situation really is desperate enough that only a global ban will save us, and if the hidden hypercompetent elites prevent a ban from happening, then that means they were incompetent when faced with this particular crisis.
In any case, while a ban might obtain some grassroots support from the vast majority of humanity, who might regard the creation of AI as an utterly unnecessary gamble, for now, the actual powers in the world seem to favor a mix of regulation, AI safety, and accelerationism.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T12:25:50.959Z · LW(p) · GW(p)
You've said before, don't lobby to ban AI, because that might make you an enemy of the deep state. Here you say, focus on speeding up AI safety, rather than slowing down AI capabilities, for the same reason.
It's weird to think I'm getting a reputation for that stance! But it makes sense, since I keep making that point. I'm not actually particularly attached to that stance; I just think that awareness of key factors is undersupplied in the rationalist community. I think that more people can easily read more about how rapid AI capabilities research is considered a US national security priority [EA · GW].
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2023-07-12T08:14:36.930Z · LW(p) · GW(p)
read more about how rapid AI capabilities research is considered a US national security priority
That is a 2021 document. It dates from the era before ChatGPT. AI is treated as a mighty new factor in human affairs, but not as a potentially sovereign factor, independent of human agency.
You know, I'm a transhumanist, and my own push is towards more work on what OpenAI has dubbed "superalignment", because a successful global ban seems difficult, and I do think there is some chance of using rising AI capabilities to solve the unknown number of problems that have to be solved before we know how to perform superalignment. Also, I don't know what the odds are that superintelligent AI will cause human extinction. There's clearly a nonzero risk, and it may be a very high risk, but maybe emergent behaviors and something less than superalignment do sometimes add up to a human-friendly outcome.
Nonetheless, I actually think it would be a somewhat healthier situation if there were a movement and a lobby group overtly devoted to banning superhuman AI - to preventing it from ever coming into being. I think that would be something distinct from MIRI and Eliezer, who in the end are not Luddite human preservationists; their utopia is a transhumanist technocracy that has (through the "pivotal act") made the world safe for superalignment research.
Looking at existing political forces, I can imagine a Green Party supporting this; and (in America) maybe populist elements of the two main parties. I don't think it's politically impossible; maybe RFK Jr would support it, and he's been endorsed by Jack Dorsey.
It's funny, but Perry Metzger, an extropian who is arguably Eliezer's main Twitter antagonist at the moment, has been tweeting about how EA billionaires have installed anti-AI policy in the EU, and they might do it in the US too, and it's time for pro-AI groups to fight back; and judging by his public statements, Marc Andreessen (for example) might yet end up backing covertly accelerationist lobby groups. The PR problem for accelerationism is that there isn't much of a political base for making the human race obsolete. Publicly, they'll just talk about using AI to cure everything and increase quality of life, and they'll save their cyborgist thoughts for their fellow alts on e/acc Twitter.
Another element of elite opinion that's not to be dismissed is the outright skeptics. There's a particular form of this that has emerged in the wake of the great AI panic of 2023, according to which the tech CEOs are backing all this talk of dangerous AI gods in order to distract from social justice issues, or to keep the VC money coming in, or to make themselves a regulated monopoly. (If /r/sneerclub were still active, you could read about this there, but they actually suspended operations in protest at Reddit's new regime, and retreated to Mastodon.) This line of thought seems to be most popular among progressives, who are intellectually equipped for cynical deflationary readings of everything, but not so much for the possibility of a genuine sci-fi scenario actually coming true; and it might get an increasing platform in mass media reporting on AI - I'm thinking of recent articles by Nitasha Tiku and Kevin Roose.
My advice for "AI deniers" is that, if they truly want to be relevant, they need to support an outright ban - not just snipe at the tech CEOs and cast aspersions at the doomer activists. But then, I guess they just don't think that superhuman AI has any actual chance of emerging in the near future.
comment by AnthonyC · 2023-07-10T03:54:34.946Z · LW(p) · GW(p)
promote the people who are actually top performers
I know this was a very small part of the post, and maybe even off topic, but why do we so often have the idea that the people performing well in their current jobs are the ones we should promote out of them? Sometimes it's true, sure, and even more often those people should get more raises, but many other times the next higher role requires significantly different skills and qualities.
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T11:48:19.177Z · LW(p) · GW(p)
I definitely agree: people who perform well in one role are out-of-distribution for the promoted role. This definitely makes it hard for any organization to end up with competent people at the top, especially due to people getting good at office politics and goodharting the entire promotion process by specializing in eliminating rivals.
comment by lc · 2023-07-10T23:52:39.286Z · LW(p) · GW(p)
If functioning lie detectors were to be invented, incentive structures as we know them would be completely replaced with new ones that are far more effective. E.g. you can just force all your subordinates to wear an EEG or go into an fMRI machine, and ask all of them who the smartest/most competent person in the office is, promote the people who are actually top performers, and fire any cliques/factions of people who you detect as coordinating around a common lie. Most middle managers with access to functioning lie detection technology would think of those things, and many other strategies that have not yet occurred to me, over the course of the thousands of hours they spend as middle managers with access to functioning lie detection technology.
Maybe, but as far as I can tell (at least, hearing secondhand from people in the military and with security clearances)... this is not how they are used? Instead the questions are extremely specific, stuff akin to "Are you an agent of foreign intelligence services?"
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T11:53:13.845Z · LW(p) · GW(p)
My model definitely points towards a large proportion of polygraph machines just being empty boxes, since it's about monopolizing advanced technology, not just monopolizing competent VIPs (as a side note, I think that "hypercompetent" was a poor choice of words on my part, since the crux of the post is that it's difficult to evaluate the competence of opaque systems). Some lie detection machines will be running better and more modern tech than others, but it still makes sense to ask everyone a question that (ideally) only agents of foreign intelligence services would give off weird readings in response to.
A big reason why my competent-inner-regime model runs into falsifiability problems is that the competent, trustworthy VIPs and the monopolized technology (e.g. functioning lie detectors) would be deployed conservatively: stretching networks thin means more surface area, which means more risk of being compromised by foreign intelligence agencies.
comment by Nicholas / Heather Kross (NicholasKross) · 2023-07-09T20:12:20.017Z · LW(p) · GW(p)
Nitpick: I think "gulf states" should probably be capitalized like "Gulf states".
Replies from: TrevorWiesinger
↑ comment by trevor (TrevorWiesinger) · 2023-07-11T12:14:53.028Z · LW(p) · GW(p)
Fixed, thanks.