Posts

Templarrr's Shortform 2024-04-15T08:39:58.129Z
Cult of equilibrium 2024-04-04T09:19:51.492Z
"Wide" vs "Tall" superintelligence 2023-03-19T19:23:57.966Z
Stop calling it "jailbreaking" ChatGPT 2023-03-10T11:41:06.478Z

Comments

Comment by Templarrr (templarrr) on AI #93: Happy Tuesday · 2024-12-08T17:42:47.048Z · LW · GW

what the median essay, story, or response to the assignment will look like so they can avoid and transcend it all

Obligatory joke about how terrible our education is: half of the scores are below the median!

Comment by Templarrr (templarrr) on AI #89: Trump Card · 2024-11-11T15:30:25.516Z · LW · GW

they’re 99% sure are AI-generated, but the current rules mean they can’t penalise them.

The issue is proving it.

That is very much not the issue. The issue is that academia has spent the last few hundred years making sure papers are written in the most inhuman way possible. No human being talks the way whitepapers are written. "We can't distinguish whether this was written by a machine or by a human who is really good at pretending to be one" can't be a problem when that style was heavily encouraged for centuries. Also a fun reverse-Turing-test situation.

Comment by Templarrr (templarrr) on Occupational Licensing Roundup #1 · 2024-10-31T09:49:55.066Z · LW · GW

Two things to note.

First - I feel like putting every occupation in the same pile and deciding whether you are for or against licensing isn't helpful. I personally don't need a licensed lawn mower, but I would very much prefer a licensed doctor. The cost of a mistake differs a lot between the two occupations and can be used as a threshold for which jobs should require a license.

Second - there should be a difference between doing a thing to yourself (an argument can even be made that here we shouldn't have any limits), doing things for free for your friends/relatives with their full knowledge of your skill level and experience (most non-life-threatening things can probably be allowed here), and selling your craft for money.

Comment by Templarrr (templarrr) on AI #87: Staying in Character · 2024-10-29T20:15:30.073Z · LW · GW

llms don’t work on unseen data

Unfortunately I hear this quite often, sometimes even from people who should know better. 

A lot of them confuse this with the actual thing that exists: supervised ML models (of which LLMs are just a particular type) tend to work much worse on data outside the training distribution. If you train your model to determine the volume of apples and oranges and melons and other round-y shapes, it will work quite well on any round-y shape, including all kinds of unseen ones. But it will suck at predicting the volume of a box.

You don't need the model to see every single game of chess; you just need the new situations to be within the distribution built from the massive training data, and they most often are.

A real out-of-distribution example in this case would be to train it only on chess and then ask for the next best move in checkers (relatively easy OOD - same board, same type of game) or Minecraft.
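The round-y-shapes example above can be sketched in a few lines. This is a minimal illustration, not from the original comment: a cubic polynomial stands in for the "model", sphere volumes for the training distribution, and all numbers are made up.

```python
import numpy as np

# "Training data": diameters of round shapes and their volumes (v = pi/6 * d^3)
diameters = np.linspace(1.0, 10.0, 50)
volumes = np.pi / 6 * diameters ** 3

# Fit a cubic polynomial as a stand-in for a supervised model
coeffs = np.polyfit(diameters, volumes, 3)

# An unseen round shape is still in-distribution: prediction is near-perfect
d_new = 7.3
pred = np.polyval(coeffs, d_new)
true_sphere = np.pi / 6 * d_new ** 3
print(abs(pred - true_sphere) / true_sphere)  # ~0

# A box with side d_new follows a different rule: true volume is d^3,
# but the model keeps applying the "round shape" pattern and is far off
true_box = d_new ** 3
print(abs(pred - true_box) / true_box)
```

The model generalizes fine to unseen inputs from the same family of shapes; it fails only when the underlying rule changes.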

Comment by Templarrr (templarrr) on AI #85: AI Wins the Nobel Prize · 2024-10-13T18:49:12.400Z · LW · GW

the *real* problem is the huge number of prompts clearly designed to create CSAM images

So, people whose tastes are harmful and deviate from the social norm isolate themselves in digital fantasies instead of causing problems in the real world, and that is a problem... exactly how?

I mean, obviously, it's a coping mechanism, not a fix for the underlying problem, but our society also isn't known for being very understanding toward people who come forward with this kind of deviation wanting to fix it.

Comment by Templarrr (templarrr) on Monthly Roundup #22: September 2024 · 2024-10-03T10:00:28.899Z · LW · GW

India getting remarkably better in at least one way, as the percentage of the bottom 20% who own a vehicle went from 6% to 40% in only ten years.

Is it better though? This statistic shows only "who owns a vehicle", not "who is happy about the fact". It doesn't show how many people were forced into debt because owning a vehicle was the only way to live. In an ideal world nobody should need a personal vehicle to survive; it should remain a luxury, not a lifeline.

Comment by Templarrr (templarrr) on AI #80: Never Have I Ever · 2024-09-11T12:39:38.244Z · LW · GW

The inclusion of ‘natural disaster’ shows that this simply is not a thing people are thinking about at all.

The Chicxulub and Popigai impactors were both pretty natural. Actually, of the five things listed, "natural disasters" is the only category that has caused actual extinction events in the past. So I'm a bit confused by this comment.

Comment by Templarrr (templarrr) on Monthly Roundup #21: August 2024 · 2024-08-21T11:35:33.802Z · LW · GW

Peter Thiel on his struggle to leave California

Honestly, at this point someone with some self-awareness would start to suspect that the problem may not be on the cities' side. There's nothing wrong with searching for a better place for yourself, everyone is entitled to it, but when literally nothing fits...

Comment by Templarrr (templarrr) on Beware the science fiction bias in predictions of the future · 2024-08-19T09:08:08.984Z · LW · GW

If the answer is yes to all of the above

Point 2 needs rephrasing. 


"Does it sound exciting or boring?" "Yes"

Comment by Templarrr (templarrr) on Monthly Roundup #20: July 2024 · 2024-07-28T10:26:19.249Z · LW · GW

Most Importantly Missing

Where's my "Babylon 5"? Honestly, risking the anger of Trekkies here, but it's "DS9 but better".

Comment by Templarrr (templarrr) on Monthly Roundup #20: July 2024 · 2024-07-28T10:14:45.307Z · LW · GW

Does the Nobel Prize sabotage future work?

My first thought was "regression to the mean", and judging from a lot of comments on the original post I'm not the only one. If you're at the top of the world, the only way to go is down.

Comment by Templarrr (templarrr) on Monthly Roundup #20: July 2024 · 2024-07-28T10:07:39.744Z · LW · GW

Your periodic reminder.

Except there should also be an understanding of what constitutes constructive "questioning of the science". There can be no debate between a quantum physicist and a cobbler about quantum physics. Questioning the science isn't "I decided I know better" and isn't "I don't want to believe your results" (by itself). You question science by checking, double-checking, and finding weaknesses in the previous science. And by making new, better, more rigorous science.

People tend to forget this part even more often than the part about questioning being an integral part of science.

Comment by Templarrr (templarrr) on AI #74: GPT-4o Mini Me and Llama 3 · 2024-07-27T18:33:47.732Z · LW · GW

Compared to how much carbon a human coder would have used? Huge improvement.

JSON formatting? That's literally a millisecond in a dedicated tool. And unlike an LLM, it will not make mistakes you need to check for. Someone using an LLM for this is just too lazy to turn on their brain.

That said, people not using their brains is a frequent occurrence, but still... not something to praise.
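For the record, the "dedicated tool" route really is a one-liner. A minimal sketch with Python's standard library (the sample payload is made up):

```python
import json

raw = '{"name":"test","values":[1,2,3]}'

# Deterministic, validated pretty-printing: parse, then re-serialize.
# Malformed input raises an error instead of silently producing garbage.
pretty = json.dumps(json.loads(raw), indent=2)
print(pretty)
```

Unlike a language model, this either produces byte-identical output every time or fails loudly on invalid input.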

Comment by Templarrr (templarrr) on AI #73: Openly Evil AI · 2024-07-23T10:55:38.766Z · LW · GW

I'm not implying, I'm saying it outright. Depending on how you measure and the source of the data, police only solve between 5% and 50% of crime. And that only takes reported crime into account, so the actual fraction, even measured in the most police-friendly way, is lower. At the very least, as many criminals are walking free as are being caught.

Criminals are found in the places where police look for criminals. And those findings become the stats, sociological profiles, and training data for AI to pick up patterns from.

Comment by Templarrr (templarrr) on AI #73: Openly Evil AI · 2024-07-23T10:25:45.088Z · LW · GW

On the topic of the "why?" reaction - that is just how supervised machine learning works. The model learns the patterns in the training data (and interpolates between data points using those patterns). And the training data only contains information about prosecutions, not actual crime. If (purely theoretically) people named Adam were found guilty 100% of the time in the training data, this pattern will be noticed. Even though the name has nothing to do with the crime.
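A toy sketch of how a model picks up that spurious pattern. The records and names are invented for illustration, and the conditional frequency below stands in for whatever patterns a real model would learn:

```python
# Toy "prosecution" dataset: the Adam/guilty correlation is an artifact
# of who got prosecuted, not of who committed crimes.
records = [
    ("Adam", "guilty"), ("Adam", "guilty"), ("Adam", "guilty"),
    ("Brian", "guilty"), ("Brian", "acquitted"),
    ("Carol", "acquitted"), ("Carol", "guilty"), ("Carol", "acquitted"),
]

def p_guilty(name):
    """The simplest possible 'model': conditional frequency per name."""
    outcomes = [verdict for n, verdict in records if n == name]
    return outcomes.count("guilty") / len(outcomes)

print(p_guilty("Adam"))   # 1.0 -- the spurious pattern is learned perfectly
print(p_guilty("Carol"))
```

The name carries no causal information, but any pattern-matcher trained on these records will treat it as a perfect predictor.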

It's really difficult to get truly unbiased training data. There are bias mitigation algorithms that can be applied after the fact to a model trained on biased data, but they have their own problems. First of all, their efficiency at bias mitigation itself usually varies from "bad" to "meh" at best. More importantly, most of them work by introducing a counter-bias, which can infuriate the people the model is now biased against, and that counter-bias will have its own detrimental secondary effects. And this correction usually makes the model less accurate in general.

To give a physical analogy for attempts to fix the model "after the fact": if one of a helicopter's blades gets chipped and becomes 10 cm shorter, you don't want to fly on this unbalanced, heavy, rotating murder shuriken. You can "balance" it by chipping the opposite blade the same way, or all the blades the same way, but while that solves the balance, you now have less thrust and a lighter component, and you need to update everything else in the helicopter to accommodate it, etc. So in reality you just throw away the chipped blade and get a new one.

Unfortunately sometimes you can't get unbiased data because it doesn't exist.

Comment by Templarrr (templarrr) on AI #73: Openly Evil AI · 2024-07-22T13:44:55.629Z · LW · GW

virtually all the violent crime in the city was caused by a few hundred people

virtually all the violent crime prosecutions were caused by a few hundred people. Which is very much not the same. That's the real reason why the EU "pretends that we do not know such things". If the goal is to keep prosecuting whom we have always prosecuted, we can use AI all the way. If we want to do better... we can't.

Comment by Templarrr (templarrr) on Medical Roundup #3 · 2024-07-10T09:20:59.662Z · LW · GW

The situation is that there is a new drug that is helping people without hurting anyone, so they write an article about how it is increasing ‘health disparities.’

Isn't "solving for the equilibrium" a big thing in this community? That's what articles like this do - count not only first order effects, but also what those lead to. 

Specifically: people with money and resources gobble up all the available "miracle" drug, making people with fewer resources unable to get it even for the established medical uses. So yeah, I really don't see a problem with the article's title (specifically the title; I haven't read the content!), it's stating the facts. Finding a new use for a limited resource makes poor people's access to it even worse than before.

Of course, "let's make less miracle drugs" isn't a solution, solution is to make more of them, so that everyone who need one can get one. Finding new cures isn't the problem, terrible distribution pipelines is.

Comment by Templarrr (templarrr) on AI #71: Farewell to Chevron · 2024-07-05T10:39:39.395Z · LW · GW

only to find out it is censored enough I could have used DALL-E and MidJourney.

Last "censoring" of Stable Diffusion was done via the code and could've been turned off via 2 lines of code change. Was it done other way this time? 

Comment by Templarrr (templarrr) on AI #69: Nice · 2024-06-24T11:59:05.091Z · LW · GW

Probably some people would have, if asked in advance, claimed that it was impossible for arbitrarily advanced superintelligences to decently compress real images into 320 bits

And it still is. 

This is really pushing the definition of what can be considered "image compression". Look, I can write the sentence "black cat on the chessboard" and most of you (except the people with aphantasia) will see an image in your mind's eye. And that phrase is just 27 bytes! I have a better "image compression" than in the whitepaper! Of course everyone sees a different image, but that's just "high-frequency detail", not the core meaning.
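The arithmetic behind that jab checks out. A quick sketch (the phrase is the one from the comment; 320 bits is the budget quoted above):

```python
prompt = "black cat on the chessboard"

# The text "codec": 27 ASCII bytes = 216 bits,
# already under the 320-bit budget from the paper
prompt_bits = len(prompt.encode("utf-8")) * 8
print(prompt_bits)        # 216
print(prompt_bits < 320)  # True
```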

Comment by Templarrr (templarrr) on GPT-4o My and Google I/O Day · 2024-05-20T11:59:03.965Z · LW · GW

First it was hands. Then it was text, and multi-element composition. What can we still not do with image generation?

Text generation is considerably better, but still limited to a few words, maybe a few sentences. Ask it to generate a monitor with Python code on it and you'll see the current limitations. It's an improvement for sure, but in no way a "solved" task.

Comment by Templarrr (templarrr) on Monthly Roundup #18: May 2024 · 2024-05-20T09:57:49.461Z · LW · GW

Europeans... vastly less rich than they could be.

POSIWID. The metric being optimized is not "having the most money". It is debatable whether it should be; as one of the "poor Europeans", my personal opinion is that we're doing just fine.

Comment by Templarrr (templarrr) on Losing Faith In Contrarianism · 2024-04-26T09:50:23.256Z · LW · GW

There are 2 topics mixed here.

  1. Existence of the contrarians.
  2. Side effects of their existence.

My own opinion on 1 is that they are necessary in moderation. They do the "exploration" part of the "exploration-exploitation dilemma". By the very fact of their existence they allow society in general to check alternatives and find more optimal solutions to problems compared to the already known "best practices". It's important to remember that almost everything we know now started with some contrarian - once it was well-established truth that monarchy was the best way to rule the people and democrats were dangerous radicals.

On 2 - it is indeed a problem that contrarian opinions are more interesting on average, but the solution lies not in somehow making them less attractive, but in making conformist material more interesting and attractive. That's why it is paramount to have highly professional science educators and communicators, not just academics. My own favorites are the vlogbrothers (John and Hank Green) in particular and their team at Complexly in general.

Comment by Templarrr (templarrr) on Examples of Highly Counterfactual Discoveries? · 2024-04-24T08:18:38.016Z · LW · GW

Penicillin. Gemini tells me that the antibiotic effects of mold had been noted 30 years earlier, but nobody investigated it as a medicine in all that time.

Gemini is telling you a popular urban-legend-level understanding of what happened. The discovery of penicillin as a random event, "by mistake", has at most a tangential connection to reality. But it is a great story, so it spread like wildfire.

In most cases, when we read "nobody investigated" it actually means "nobody had succeeded yet, so they weren't in a hurry to make it known", which isn't a very informative data point. No one ever succeeds, until they do. And in this case it's not even that: the antibiotic properties of some molds were known and applied for centuries before (well, obviously, before germ theory they weren't known as "antibiotic", just as helpful...). The great work of Fleming and later scientists was in finding a particularly effective type of mold and extracting the exact effective chemical, as well as finding a way to produce it at scale.

Comment by Templarrr (templarrr) on Templarrr's Shortform · 2024-04-15T08:39:58.222Z · LW · GW

I wonder at which point we'll start seeing LLM-on-a-chip.

One big reason for the inefficiency of current ML/AI systems is simply abstraction-layering overhead, the price we pay for flexibility. We currently run hardware that runs binary calculations that run software that runs other software (many, many layers here: OS/drivers/programming language stacks/NN frameworks etc.) that finally runs the part we're actually interested in - the bunch of matrix calculations representing the neural network. If we collapse all the unnecessary layers in between, burning the calculations directly into hardware, running a particular model should be extremely fast and cheap.

Comment by Templarrr (templarrr) on (Rational) Decision-Making In Wartime · 2024-04-10T20:47:51.481Z · LW · GW

Thank you for this summary! It is nice to see someone covering these topics here, I personally rarely have enough nerves left after 2+ years of this hell. Victory to Ukraine and peaceful skies to us all!

Comment by Templarrr (templarrr) on AI #58: Stargate AGI · 2024-04-08T10:00:39.479Z · LW · GW

This is what happens when the min wage is too high

... These automated kiosks have existed for years and were used at McDonald's for years. And in places where they were installed, McDonald's had better employment, not worse - there was exactly the same number of staff members on the same-ish salary but with a decreased load on each member, while, as stated, leading to slightly bigger orders and less awkwardness.

Comment by Templarrr (templarrr) on AI #58: Stargate AGI · 2024-04-08T09:21:43.789Z · LW · GW

So far I have been highly underwhelmed by what has been done with newly public domain properties

One can argue it's quite an argument in favor of shortening the protection period. We can observe firsthand that works going public doesn't cause any problems for the previous owners at all, and my opinion is that we are cutting it too far. If we want a proper balance between ownership and creativity, we need to put the threshold somewhere where it is at least a mild inconvenience for the owners, maybe more.

Comment by Templarrr (templarrr) on Cult of equilibrium · 2024-04-07T21:27:42.450Z · LW · GW

Oh, there are absolutely correct places to use the phrase and correct places to benefit from reliable simplicity! My main argument is against the mindless usage that I unfortunately witness a lot nowadays. Understanding why and when we need to solve for the equilibrium gets replaced by simple belief in a rule that we should - always and for everything.

Comment by Templarrr (templarrr) on Can any LLM be represented as an Equation? · 2024-03-14T10:21:32.842Z · LW · GW

Depends on what you include in the definition of an LLM. The NN itself? Sure, it can. With the caveat of hardware and software limitations - we aren't dealing with EXACT math here; floating-point rounding and the non-deterministic completion order of parallel computation will introduce slight differences from run to run even though the underlying math stays the same.
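The floating-point caveat is easy to demonstrate: addition of doubles is not associative, so a different completion order gives a (slightly) different result. A minimal sketch:

```python
# Same three numbers, two summation orders -- IEEE 754 rounding differs
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # False
print(a, b)
```

Scaled up to billions of parallel additions in matrix multiplications, these rounding differences are why two runs of the "same" math can diverge slightly.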

The system that preprocesses information, feeds it into the NN, and postprocesses the NN output into readable form? That is trickier, given that these usually involve some form of randomness; otherwise the LLM's output would be exactly the same given exactly the same inputs, which is generally frowned upon as not very AI-like behavior. But if the system uses pseudo-random generators for that, those can also be described in math terms, if you know the generator's seed.
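A quick illustration of that last point, using Python's standard generator (any seeded PRNG behaves the same way):

```python
import random

# Two generators with the same seed produce identical "random" sequences,
# so the sampling step is fully describable once the seed is known
gen1 = random.Random(1234)
gen2 = random.Random(1234)
seq1 = [gen1.random() for _ in range(5)]
seq2 = [gen2.random() for _ in range(5)]
print(seq1 == seq2)  # True
```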

If they use a non-deterministic source for their randomness - no. But that is rarely required and makes the system really difficult to debug, so I doubt it.

Comment by Templarrr (templarrr) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T16:03:34.410Z · LW · GW

Both Gemini and GPT-4 also provide quite interesting answers on the very same prompt.

Comment by Templarrr (templarrr) on Monthly Roundup #14: January 2024 · 2024-01-26T13:23:10.960Z · LW · GW

Adam Grant suggests: “I’m giving you these comments because I have very high expectations for you, and I’m confident you can reach them. I’m trying to coach you. I’m trying to help you.” Then you give them the feedback. Love it.

These are great, but unfortunately they only work if the person is ready to accept your authority as a coach. If they don't, it works in the opposite direction.

Comment by Templarrr (templarrr) on Monthly Roundup #14: January 2024 · 2024-01-26T12:26:00.815Z · LW · GW

California Fatburger manager trims hours, eliminates vacation days and raises menu prices in anticipation of $20/hour fast food minimum wage. That seems like a best case...

That's not how any of this works. You don't do that beforehand just because the wage will be $20/h. If you actually need to, you prepare plans conditional on wages becoming $20/h. If you do it now, that's because of greed. And because of greed you'll also repeat it when the wages actually rise.

Comment by Templarrr (templarrr) on AI #43: Functional Discoveries · 2023-12-27T15:57:36.359Z · LW · GW

Writers and artists say it’s against the rules to use their copyrighted content to build a competing AI model

The main difference is that they say it NOW, after the fact, while OpenAI said so beforehand. There's a long history of bad things happening when laws and rules are introduced retroactively.

Comment by Templarrr (templarrr) on Monthly Roundup #13: December 2023 · 2023-12-22T14:39:17.116Z · LW · GW

You need a way to not punish (too harshly or reliably) the shoplifting mom in need, without enabling roving gangs

And the easiest way to do that would be to make it so moms don't need to shoplift - provide things in a centralized way, free of charge or at minimal prices. But in the USA that will immediately be labeled "socialism", and "socialism is bad".

Comment by Templarrr (templarrr) on Monthly Roundup #12: November 2023 · 2023-11-14T19:22:00.800Z · LW · GW

It really is weird that we don’t think about Russia, and especially the USSR, more in terms of the universal alcoholism.

"Apart from drinking, there is absolutely nothing to do here". Well, they found an alternative - go kill neighbors. Locally it's a crime, but when on the scale of countries...

Comment by Templarrr (templarrr) on Progress links digest, 2023-11-07: Techno-optimism and more · 2023-11-08T14:40:09.159Z · LW · GW

Agricultural land efficiency (via @HumanProgress):

"relative to 1961" label is doing a lot of storytelling here that isn't necessary present in the original raw data

Comment by Templarrr (templarrr) on Progress links digest, 2023-11-07: Techno-optimism and more · 2023-11-08T14:35:06.166Z · LW · GW

Policies are organizational scar tissue. They are codified overreactions to situations that are unlikely to happen again

Oversimplification. In most situations where people point to stats like this, they conveniently forget that these situations became unlikely to happen again BECAUSE of the policy. If you use an analogy, use it all the way: scar tissue is an important part of healing. The first instance created an open wound, and you don't want to be left with an open wound.

Comment by Templarrr (templarrr) on Progress links digest, 2023-11-07: Techno-optimism and more · 2023-11-08T14:28:47.193Z · LW · GW

technology is predictable if you know the science

The single part of an otherwise amazing quote that is simply, verifiably not true. There are tons of examples where the technological use of some scientific principle or discovery came as a complete surprise to the scientists who created/discovered it.

Comment by Templarrr (templarrr) on AI #34: Chipping Away at Chip Exports · 2023-10-23T09:29:38.560Z · LW · GW

If we don’t want China to have access to cutting edge chips, why are we allowing TSMC and Samsung to set up chip manufacturing in China?

Because "we" that don't want Chine to have these and "we" that actually have a say in what TSMC and Samsung is doing are two different "we"s.

Comment by Templarrr (templarrr) on AI #33: Cool New Interpretability Paper · 2023-10-15T18:13:16.621Z · LW · GW

journalists creating controversial images, writing about the images they themselves created, and blaming anyone but themselves for it.

TBH that's a perfect summary of a lot of AI safety "research" as well. "Look, I specifically asked it to shoot me in the foot, I bypassed and disabled all the guardrails, and the AI shot me! AI is a menace!"

Comment by Templarrr (templarrr) on Progress links digest, 2023-10-12: Dyson sphere thermodynamics and a cure for cavities · 2023-10-15T17:05:10.166Z · LW · GW

What happened around the year 2000 that dramatically altered youth culture

(half-serious) People found that if you don't start the Y axis from zero, you can make the effect appear as big or as small as you want.

(more serious) Probably a combination of factors - a society scared after 9/11, plus improvements in personal electronics and the internet, meant there was simultaneously less desire, less societal push, and fewer ways to do the listed things.

Comment by Templarrr (templarrr) on AI #30: Dalle-3 and GPT-3.5-Instruct-Turbo · 2023-09-25T08:52:05.672Z · LW · GW

Helsing, a European AI defense startup raising $223 million at a $1.7 billion valuation.

Naming choices... "We also have an immortal unstoppable monster in the basement, but ours is on the good side!"

Comment by Templarrr (templarrr) on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-09-18T09:28:47.450Z · LW · GW

Oh, hi, EY, I see you found this :) A single vote with -10 power (2 -> -8) is a lot. Wield that power responsibly.

Comment by Templarrr (templarrr) on AI#28: Watching and Waiting · 2023-09-08T19:13:18.676Z · LW · GW

Roon also lays down the beats

This isn't a link, so I can't verify whether the source was mentioned, but these aren't his lyrics. It's the third verse from an ERB video from 2012.


Comment by Templarrr (templarrr) on AI#28: Watching and Waiting · 2023-09-08T18:45:50.777Z · LW · GW

followed by strategies humans haven’t even considered

followed by strategies humans wouldn't even understand, because they do not translate well to human language. I.e., they can be translated directly, but no one will understand why they work.

Comment by Templarrr (templarrr) on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-28T11:33:49.191Z · LW · GW

Overall, I'd love EY to focus on his fiction writing. He has an amazing style and way with words, and the "I created a mental model and I want to explore it fully, and if the world doesn't fit the model it's the world's problem" type of thinking is extremely beneficial there. It's what all famous writers were good at. His works will be amazing cautionary tales on par with 1984 and Brave New World.

Comment by Templarrr (templarrr) on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-28T09:46:56.574Z · LW · GW

Thank you! We need fewer "yes men" here and more dissenting voices. The vote counter on this post will be deeply negative, but that is expected - many people here are exactly in the period you described as "yourself 2 years ago".

EY is mostly right when he talks about the tools to use (all the "better thinking", rationalist anti-bias methods); EY is mostly wrong when he talks about his deeply rooted beliefs on topics he doesn't have a lot of experience in. Unfortunately, that covers most of the topics he speaks about, and it isn't clearly seen due to his vocabulary and the fact that he is a genuinely smart person.

Unfortunately^2, it looks like he failed to identify his own biggest bias, which I personally prefer to call the "Linus Pauling effect": when someone is really, really smart (and EY is!), he thinks he's good at everything (even when he simultaneously acknowledges that he isn't - probably the update value of this in his NN could really use a bigger weight!) and wants to spread the "wisdom" about everything, without understanding that IQ plus rationality is a crappy substitute for experience in the area.

Comment by Templarrr (templarrr) on AI #26: Fine Tuning Time · 2023-08-26T10:08:30.660Z · LW · GW

It is also very much not okay, seriously what the hell.

I 100% agree, it's extremely not OK to violate privacy by going through other people's files without consent. Actually deleting them is so far beyond a red flag that I think this relationship was doomed long before anything AI-picture-related happened.

Comment by Templarrr (templarrr) on AI #25: Inflection Point · 2023-08-22T11:40:48.992Z · LW · GW

AI right now is excellent at the second and terrible at the first

Just like 99.9% of humanity.
These are two different kinds of "creativity": you can push the boundaries, exploring something outside the distribution of existing works, or you can explore within boundaries that are as "filled" with creations as our solar system is with matter. I.e., mostly not.

Limiting creativity to only the first kind and asking everyone to push the boundaries is 

  1. Impossible - most people are incapable of it.
  2. Non-scalable - each new breakthrough can be done only once. There can be only one Picasso; everyone else doing similar work, even if they arrived at the same place one day later, will already be Picasso-like followers.
  3. Irresponsible - some borders are there for a reason and are not supposed to be pushed. I'm pretty sure there are more than zero people who would want to explore the "beauty" of people seconds before they die, yet I'm willing to live in a world where this is left unexplored.

Comment by Templarrr (templarrr) on AI #23: Fundamental Problems with RLHF · 2023-08-08T12:38:27.198Z · LW · GW

Tyler Cowen asks GPT-4 if room temperature superconductors (if they existed) would more benefit military offense, or military defense... It is a strange question to be asking ... this is the type of question where human experts are going to outperform.

It's a strange question, period. There are no strictly defensive or strictly offensive weapons, only defensive and offensive usage. Even anti-aircraft weapons, the most defensively oriented in use right now, can be used (sometimes after minor software updates) to attack ground targets. And even the most offensive weapons (e.g., nukes) can be a strong defensive deterrent.