Zoom's security is not that bad

post by lc · 2020-04-05T11:31:42.755Z · score: 5 (8 votes) · LW · GW · 16 comments

Disclaimer: just an intern/college student, wrote this at 4:00 A.M.


In the past two days Zoom has been hit with a dozen or so articles for its apparently poor security and steadfast allegiance to the Chinese government. The root of this outcry, or at least the earliest/most specific/most technical articles I can find, appears to be this report by Citizen Lab and this one by a security researcher. Combing through these and the above, I really only see two real issues cited from the last few months, one of which was patched, and neither of which seems particularly severe in terms of real-world impact.

The first problem was a few relatively trivial local privilege escalation vulnerabilities in their macOS client, which were patched shortly after a security researcher published them on his blog. This is probably the lowest-severity class of vulnerability that can be written up without being seen as desperate or cheeky by most people in security. Local privilege escalation on an end user's laptop is a vulnerability, yes, but it is almost completely useless. However pervasively the media reports cases to the contrary, most attackers with ongoing remote code execution on your laptop are more interested in your credit cards and Monero mining capabilities than your microphone and camera.

The second, which to my knowledge has not been fixed, is Zoom's use of ECB mode for encrypting their video communications. This seems like much more of a show of incompetence than the former (I can't remember any other company being called out for this practice in my lifetime), but these cryptographic problems are the kind of "academic" bug that is easier to spot than to exploit. Despite what you were told in your C.S. class, using the ECB cipher mode is not a security vulnerability, if we're supposed to designate security vulnerabilities based on their demonstrated risk to security. An attacker with network access to your Zoom traffic can tell, within the same call, when the 128-bit encrypted blocks have matching input blocks. That's not a PoC. PoC || GTFO.
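To make the leak concrete, here is a minimal toy sketch (my own example using OpenSSL, not Zoom's code or keys) of the property being complained about: under ECB, identical plaintext blocks encrypt to identical ciphertext blocks, so an eavesdropper can spot repetition without ever recovering the key.

```c
/* Toy demonstration of the ECB property; not Zoom's actual code path.
 * Compile with: gcc ecb_demo.c -lcrypto */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char key[16];
    unsigned char plaintext[32];
    unsigned char ciphertext[48];
    int len = 0;

    memset(key, 0x42, sizeof(key));             /* toy AES-128 key */
    memset(plaintext, 'A', sizeof(plaintext));  /* two identical 16-byte blocks */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
    EVP_CIPHER_CTX_set_padding(ctx, 0);         /* input is already block-aligned */
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, sizeof(plaintext));
    EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
    EVP_CIPHER_CTX_free(ctx);

    /* The two ciphertext blocks come out byte-for-byte identical: repetition
     * in the plaintext is visible to anyone on the wire, but nothing more. */
    printf("blocks match: %s\n",
           memcmp(ciphertext, ciphertext + 16, 16) == 0 ? "yes" : "no");
    return 0;
}
```

Seeing that two ciphertext blocks match is the whole leak; turning that into recovered audio or video is the part nobody has demonstrated.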

That's it. Those are the only real security issues I could find, at least within the last year. There was a pretty egregious one before that, again involving the macOS client (something going wrong with that team?): a remotely exploitable vulnerability via an installed web server that didn't grant code execution but did let an attacker pull you into a call. It was reported over a year ago, and it doesn't seem to have been exploited in the wild. There was the matter of them lying about providing end-to-end encrypted communications on their proprietary conferencing app, and while that reflects poorly on Zoom in a character sense, I don't think anyone who actually needs the security of end-to-end encryption is going to rely on Zoom to speak to other people, and I don't think anyone who thought about it for ten seconds believed they possessed this feature anyway. And then there was the sudden braindamaged outcry about them using AES-128 instead of AES-256, which is like being mad at someone for using a 32-character random password instead of a 64-character random password, and is a complaint that should immediately make you lose any respect for the security awareness of the person making it.
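As a back-of-the-envelope sketch of that last point (the attacker speed below is a made-up, deliberately generous number), brute-forcing a 128-bit key is already hopeless, which is what makes the jump to 256 bits irrelevant in practice:

```c
/* Rough arithmetic only; 10^18 guesses per second is an absurdly
 * generous hypothetical attacker. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double keys = pow(2.0, 128.0);          /* size of the AES-128 keyspace */
    double guesses_per_second = 1e18;
    double seconds_per_year = 60.0 * 60.0 * 24.0 * 365.0;
    printf("%.2e years\n", keys / guesses_per_second / seconds_per_year);
    /* prints ~1.08e13 years, roughly 800 times the age of the universe */
    return 0;
}
```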

In addition, there were some default chatroom settings that have gotten fixes in the last twenty-four hours due to the ongoing dogpiling. One of these was caused by a bug where, if you leave a Zoom call open to the public, members of the public can join the Zoom call. The press has given this practice its own name, "Zoom bombing", and keeps calling the people who do it hackers. No one is hacking into these goddamn phone calls. You can scream "security by default" and "users are the weakest link" until you're asking the federal government for respirators, but there's also a selection effect to consider: people who display this level of incompetence in protecting their data also tend to value it less. I have yet to see reports of trade secrets leaked by people who failed to password-protect their Zoom sessions - only a dozen or so repeat stories of trolls spamming the N-word, which is a tragedy because such foul language never happens elsewhere on the internet.

The biggest problem the media seems to have with Zoom, though, the one that makes me froth at the mouth in its hysterical stupidity, is its ties to China. Some of these news sites - which I cannot look at without disabling 4/5 of my privacy extensions - news sites which are, in terms of bandwidth or requests, 90% analytics/ads/malware and 10% content - are apparently concerned that their weekly standups are being sent along wires that pass through China. Let me tell you personally: China has your phone calls. More importantly, American spy agencies have your internet browsing habits and your phone calls. Through the magic of the internet, with or without cooperation from the companies themselves, any computation you do not take great, impractical pains to secure from nation-state adversaries is going to end up in the hands of nation-state adversaries. Remarkably, this includes China, even when the company is not physically located in China and does not hire Chinese employees. The agency you should be most concerned about spying on you is the one that polices the country you live in. Especially if you literally live in the world's foremost surveillance state.

I'm not saying that Zoom hasn't handled application security poorly, or that China spying on you isn't bad, or that "I wouldn't run things differently if I were in charge", etc. etc. But security is a trade-off, and Zoom is a startup that up to this point has quite reasonably focused on growth and features over hiring the best security/bug-hunting staff. Zoom doesn't run a cryptocurrency exchange. They don't build internet-connected pacemakers. They're a chat app that allows you to talk to grandma over the internet instead of giving her a respiratory disease. The class action lawsuits being thrown at this company for giving data to Facebook (curiously driven, I assume, partly by people who regularly and voluntarily give data to Facebook) strike me as panic-driven and honestly xenophobic. Somehow I doubt all of the people suddenly concerned with internet privacy will quit Windows, Gmail, or Snapchat, but of course, when the prospect is a Chinese company having your data, we all have to make sacrifices.

16 comments

Comments sorted by top scores.

comment by ChristianKl · 2020-04-05T12:30:50.912Z · score: 18 (7 votes) · LW(p) · GW(p)
> I don't think anyone who actually needs the security of end-to-end encryption is going to rely on Zoom to speak to other people

Why? Politicians and business people who have meetings where high-stakes decisions are made are not always good at security awareness.

A person like Hillary Clinton did admit over insecure electronic communication that the government of Saudi Arabia funded ISIS, even though that's the kind of secret she likely doesn't want to be widely known.

comment by lc · 2020-04-05T21:11:13.819Z · score: 3 (2 votes) · LW(p) · GW(p)

Those politicians/military personnel also tend to have pretty strict protocols on how to handle classified information, deviation from which is already illegal. Most people with access to classified material like that can go to jail for plugging their iPhone chargers into government computers, let alone talking about it over the internet on their home laptops. It's not that it couldn't happen, it's that I can't believe there are many people on the margin who know what end-to-end encryption is, need it, hear Zoom has it, and then decide to use Zoom instead of some other clearnet alternative that would have saved them.

comment by ChristianKl · 2020-04-12T17:34:20.314Z · score: 2 (1 votes) · LW(p) · GW(p)

I wasn't talking about military personnel, who usually do have strict protocols. The same doesn't go for politicians.

I don't think there's a single congressman or congressional staffer who went to jail for discussing classified material on unencrypted channels.

Journalists would be another class of people who often need good opsec but don't really have it. There are likely plenty of newsrooms that are now discussing using Zoom, and for them this is a good level at which to have those discussions.

comment by edjusted (edjusted-1) · 2020-04-05T21:55:40.398Z · score: 5 (3 votes) · LW(p) · GW(p)

https://www.schneier.com/blog/archives/2020/04/security_and_pr_1.html

This might be the best one-sentence summary: "Zoom's security is at best sloppy, and malicious at worst."

And their reactions to past security-related issues have a definite "we don't really care" attitude, though that seems to have improved recently.

And I agree with your point that they are "focused on growth and features over hiring the best security/bughunting staff". That would actually seem to give further credence to their security being "sloppy at best".

As to whether or not it's "not that bad", I guess that depends on what your needs are and what "not that bad" means. I would argue that most "web companies" *should* be held to at least a minimum standard of security/privacy regardless of who their intended audience is. But I don't have any good answers as to what that means.

comment by lc · 2020-04-05T22:35:48.528Z · score: 1 (1 votes) · LW(p) · GW(p)

I guess there's a difference between "sloppy" and "Zoom is malware", which is the official position of security Twitter and some parts of the media as of today. As bad as they are, I'm afraid none of the examples of bugs in Bruce Schneier's article look remarkably different from what you can find reading the weekly security reports on hackerone.com.

comment by TimothyK · 2020-04-08T02:47:52.710Z · score: 4 (2 votes) · LW(p) · GW(p)

Excellent article, really helped put a lot of the 'fear-mongering' news articles in perspective. I still think organizations should avoid using Zoom if other alternatives are easily available, but to be fair I haven't done any feature comparisons.

There was another moderately serious bug discovered late March, patched April 2nd.

https://www.bleepingcomputer.com/news/security/zoom-lets-attackers-steal-windows-credentials-run-programs-via-unc-links/

comment by Jonathan_Graehl · 2020-04-05T21:30:52.818Z · score: 4 (2 votes) · LW(p) · GW(p)

Are you aware that Chinese nationals worldwide are often asked to collect intel or perform ops for the CCP? Do you think the disproportionate stories of industrial espionage are just disproportionate reporting? Are you aware that the CCP requires its companies to routinely violate users' privacy?

Why does it make you angry that xenophobic tendencies contribute to skepticism of reliance on Chinese software/servers? How is that at all relevant to a rational assessment?

comment by lc · 2020-04-05T21:38:53.577Z · score: 3 (2 votes) · LW(p) · GW(p)

I'm aware of all of those things. My point is that, aside from industrial espionage, all of those things are true of American spy agencies as well, and none of them are significantly mitigated by using a company that does not have Chinese servers. Perhaps if you're handling trade secrets, you may want to consider using something like Session, Keybase, or Signal. But clearly it's not "rational" to switch to Microsoft Teams to keep your high school math sessions safe from Chinese eyes, and that's what makes me frustrated that people are switching to an inferior product.

comment by TimothyK · 2020-04-08T02:47:52.710Z · score: 3 (2 votes) · LW(p) · GW(p)

I hold several different beliefs, and am curious as to what motivates your above statements:

1: Why do you believe that American spy agencies collect intel or perform ops using commercial software to a similar level as the CCP does? The level of governmental power is extremely different, even if you believe the governmental 'morals' are equal.

2: I've always heard that using servers always comes with the risk of the data being read by whichever government owns the data center. Do you believe that not to be the case? Or are you simply of the belief that every government has access to the data?

3: I see it as 'rational' to switch to Teams for your math sessions only in the same sense that using a VPN for legitimate web browsing is rational. By obfuscating your data, you are making it harder for potentially malicious actors to build and refine algorithms for mass-population manipulation. But that's a whole massive topic by itself, probably best not to get into it here.

comment by lc · 2020-04-16T22:06:51.431Z · score: 3 (2 votes) · LW(p) · GW(p)

1. Research the 2013 global surveillance disclosures by Edward Snowden. The NSA has been hacking and monitoring the users of basically every large American and foreign technology company for decades.

2. Yes, using servers in a different country mitigates the physical threat of that country's police raiding data centers and putting malware on disk drives. It does not prevent a government from hacking remote access to Zoom's servers, which is far more convenient, quiet, and effective for large intelligence organizations.

3. Just by going on how much data Microsoft collects from average Windows users, this doesn't seem to be a strong effort for that cause.

comment by TimothyK · 2020-04-17T20:54:52.865Z · score: 1 (1 votes) · LW(p) · GW(p)

1: There are two differences I see: I'd categorize it more as 'collecting' than 'monitoring,' and despite the many arms of the NSA, I'd bet the CCP is far worse. A way to measure this is network latency: traffic leaving China is noticeably slower, due to the Great Firewall and the amount of filtering CCP agencies do to all data. Traffic leaving the US encounters zero or minimal latency; so if it's being monitored, it's not real-time. I actually have worked with a person who had access to the NSA database in its pre-Snowden days. According to him, there was far more data being collected than was being used, for legal reasons and practical ones. Legally, it was not considered monitoring US persons until the traffic was decrypted; so while they might have a phone call recorded, it's not illegal until they decrypt it. (Yes, I know, this makes enforcement entirely an internal measure.)

2: The most convenient, quiet, and effective way of getting access is legitimate credentials. If you can steal them, that's great, but if you can send a police officer to tell the company to make you creds, that's way easier. I agree with you as far as high-value targets go; you do lose some secrecy if you have to bring the server owners on board. But for the average user, I'd guess it's more efficient to save your 'hackers' for more useful stuff, and use bureaucrats as much as possible in their place.

3: VPN usage is growing, but as you pointed out, data collection is growing too, at what I see as a far faster rate. I know a few optimistic people, but I'm pessimistic; I think these measures will just delay the complete loss of privacy (and therefore the 'Hari Seldon-ing' of big businesses).

comment by lc · 2020-04-24T07:12:09.515Z · score: 1 (1 votes) · LW(p) · GW(p)

1.

>I'd categorize it more as 'collecting' than 'monitoring,'

>China filters outside traffic, and the U.S. doesn't, so the U.S. must not be collecting that data for later analysis.

>I had a friend who worked for the NSA who told me it was alright. I suppose that means it was alright.

You're trying to cast ambiguity on things that are already wide public knowledge. The NSA collects and *analyzes* this data. That the U.S. doesn't block Chinese websites at the ISP level is entirely irrelevant. It makes no technical sense to halt a user's internet connection in real time while you analyze it for terrorist activity, when you can concurrently send it off to an NSA server and get the same analysis seconds later. The Great Firewall analyzes ISP traffic so that it can find its destination and drop it if it's on a blacklist. These are two completely different technical and political goals.

There is always going to be far more data than is being used when you collect data on the scale the NSA does. While I'm not saying you shouldn't take this guy's word at face value, this fact does not preclude any level of surveillance or misconduct on the NSA's part. NSA employees could be sitting in their office chairs nine hours a day looking at nudes or the emails of journalists and "most data would remain unused", or so your coworker might report.

2. With regard to the ones I'm familiar with, you are in practice incorrect, or at least most police/spy agencies currently disagree with your cost-benefit analysis. This is like saying that it's better to try to collude with the bartender at a place where the Mafia hangs out than it is to just plant wiretaps when everyone has gone home for the night. The NSA and the MSS don't *want* people who work at a technology company to know how and where they are collecting data. It unnecessarily compromises the entire point of collecting such data in the first place. The average user is nabbed in the process of clandestinely hacking "high value targets" like Google.

comment by Pattern · 2020-04-06T21:44:17.955Z · score: 2 (1 votes) · LW(p) · GW(p)

What is Session?

comment by lc · 2020-04-16T21:59:51.496Z · score: 3 (2 votes) · LW(p) · GW(p)

In my opinion, Session is by far the best architecturally designed encrypted messaging app. It's very new, and probably has some RCEs hidden in there, but every other active messaging app I've come across has critical OPSEC flaws that make it inherently inferior. Just ignore the cryptocurrency stuff if you want, though I think it could help with a lot of problems traditional anonymizing networks have; the important part is that it allows for anonymized, *decentralized* communication, and isn't coded in C.

comment by Pattern · 2020-04-16T22:30:35.720Z · score: 2 (1 votes) · LW(p) · GW(p)

Thanks.

> and isn't coded in C.

Is C insecure, or just hard to read?

comment by lc · 2020-04-16T22:45:11.324Z · score: 3 (2 votes) · LW(p) · GW(p)

C is a very old programming language that, while very close to the hardware and good for programming something that needs to run very very quickly, has very few guardrails to prevent really nasty memory corruption exploits. There are lots of footguns when programming in C that basically ensure that a program with enough code, no matter how simple, has some ungodly race condition or heap overflow that allows remote attackers to take control of your entire computer. Almost everything that doesn't run on a toaster should be programmed in something else, but people still make the decision to use this language.
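
To make "footgun" concrete, here is a minimal made-up sketch of the classic stack buffer overflow (the function and strings are invented for illustration, not taken from any real codebase):

```c
#include <stdio.h>
#include <string.h>

/* Classic C footgun: strcpy does no bounds checking, so any input longer
 * than 15 characters plus the terminator runs off the end of `name` and
 * clobbers adjacent stack memory, the raw material of memory-corruption
 * exploits. */
void greet(const char *input) {
    char name[16];
    strcpy(name, input);              /* no length check at all */
    printf("hello, %s\n", name);
}

int main(void) {
    /* Undefined behavior: a crafted input like this can overwrite the
     * return address and redirect execution. */
    greet("a string comfortably longer than sixteen bytes");
    return 0;
}
```

Memory-safe languages turn this entire class of bug into either a compile error or a clean runtime failure, which is the point being made above.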