Entries Tagged "security engineering"


Securing Internet Videoconferencing Apps: Zoom and Others

The NSA just published a survey of video conferencing apps. So did Mozilla.

Zoom is on the good list, with some caveats. The company has done a lot of work addressing previous security concerns. It still has a bit to go on end-to-end encryption. Matthew Green looked at this. Zoom does offer end-to-end encryption if 1) everyone is using a Zoom app, and not logging in to the meeting using a webpage, and 2) the meeting is not being recorded in the cloud. That’s pretty good, but the real worry is where the encryption keys are generated and stored. According to Citizen Lab, the company generates them.

The Zoom transport protocol adds Zoom’s own encryption scheme to RTP in an unusual way. By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers. Zoom’s encryption and decryption use AES in ECB mode, which is well-understood to be a bad idea, because this mode of encryption preserves patterns in the input.
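To see what "preserves patterns" means in practice, here is a minimal sketch in Python using the cryptography package. This is illustrative only, not Zoom's actual code: identical 16-byte plaintext blocks come out as identical ciphertext blocks, so structure in the audio and video shows through in the ciphertext.

    # Minimal demonstration (not Zoom's code) of why AES in ECB mode leaks patterns:
    # identical 16-byte plaintext blocks produce identical ciphertext blocks.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)                 # an AES-128 key, like the shared meeting key described above
    plaintext = b"ATTACK AT DAWN!!" * 4  # four identical 16-byte blocks

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
    print([b.hex() for b in blocks])
    assert len(set(blocks)) == 1         # the repetition in the input is plainly visible in the output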

The algorithm part was just fixed:

AES 256-bit GCM encryption: Zoom is upgrading to the AES 256-bit GCM encryption standard, which offers increased protection of your meeting data in transit and resistance against tampering. This provides confidentiality and integrity assurances on your Zoom Meeting, Zoom Video Webinar, and Zoom Phone data. Zoom 5.0, which is slated for release within the week, supports GCM encryption, and this standard will take effect once all accounts are enabled with GCM. System-wide account enablement will take place on May 30.
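For a sense of what GCM adds over ECB, here is an equally rough sketch, again illustrative and not Zoom's implementation: each message gets a fresh nonce, so identical plaintexts encrypt differently, and an authentication tag means tampering in transit is detected at decryption.

    # Illustrative sketch of AES-256-GCM (not Zoom's implementation): confidentiality
    # plus an authentication tag that makes tampering detectable.
    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)     # AES-256, as in the Zoom 5.0 announcement
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                        # 96-bit nonce, never reused with the same key
    ciphertext = aesgcm.encrypt(nonce, b"meeting media frame", None)

    tampered = bytearray(ciphertext)
    tampered[0] ^= 0x01                           # flip one bit "in transit"
    try:
        aesgcm.decrypt(nonce, bytes(tampered), None)
    except InvalidTag:
        print("tampering detected: GCM tag check failed")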

There is nothing in Zoom’s latest announcement about key management. So: while the company has done a really good job improving the security and privacy of their platform, there seems to be just one step remaining to fully encrypt the sessions.

The other thing I want Zoom to do is make the security options necessary to prevent Zoombombing available to users of the free version of the platform. Forcing users to pay for security isn’t a viable option right now.

Finally — I use Zoom all the time. I finished my Harvard class using Zoom; it’s the university standard. I am having Inrupt company meetings on Zoom. I am having professional and personal conferences on Zoom. It’s what everyone has, and the features are really good.

Posted on April 30, 2020 at 10:24 AM

Vulnerability Finding Using Machine Learning

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high priority security bugs, 97 percent of the time.

News article.
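Microsoft hasn’t published the model or the training data, but the general approach is standard supervised text classification. A toy sketch, with invented bug titles and labels and nothing Microsoft-specific, looks something like this:

    # Toy sketch of security/non-security bug triage as text classification.
    # The titles and labels are invented; Microsoft's real features, model, and
    # 13 million work items are not public in the quoted post.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    titles = [
        "Buffer overflow in packet parser",
        "SQL injection via search field",
        "Button misaligned on settings page",
        "Typo in onboarding email template",
    ]
    labels = [1, 1, 0, 0]   # 1 = security bug, 0 = everything else

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(titles, labels)

    # Classify new, unseen titles (a real system would train on millions of items
    # and report calibrated confidence, not just a label).
    print(model.predict(["Use-after-free crash when closing a tab"]))
    print(model.predict(["Dark-mode colors look washed out"]))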

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

Posted on April 20, 2020 at 6:22 AM

Microsoft Buys Corp.com

A few months ago, Brian Krebs told the story of the domain corp.com, and how it is basically a security nightmare:

At issue is a problem known as “namespace collision,” a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.

Windows computers on an internal corporate network validate other things on that network using a Microsoft innovation called Active Directory, which is the umbrella term for a broad range of identity-related services in Windows environments. A core part of the way these things find each other involves a Windows feature called “DNS name devolution,” which is a kind of network shorthand that makes it easier to find other computers or servers without having to specify a full, legitimate domain name for those resources.

For instance, if a company runs an internal network with the name internalnetwork.example.com, and an employee on that network wishes to access a shared drive called “drive1,” there’s no need to type “drive1.internalnetwork.example.com” into Windows Explorer; typing “\\drive1\” alone will suffice, and Windows takes care of the rest.

But things can get far trickier with an internal Windows domain that does not map back to a second-level domain the organization actually owns and controls. And unfortunately, in early versions of Windows that supported Active Directory — Windows 2000 Server, for example — the default or example Active Directory path was given as “corp,” and many companies apparently adopted this setting without modifying it to include a domain they controlled.

Compounding things further, some companies then went on to build (and/or assimilate) vast networks of networks on top of this erroneous setting.

Now, none of this was much of a security concern back in the day when it was impractical for employees to lug their bulky desktop computers and monitors outside of the corporate network. But what happens when an employee working at a company with an Active Directory network path called “corp” takes a company laptop to the local Starbucks?

Chances are good that at least some resources on the employee’s laptop will still try to access that internal “corp” domain. And because of the way DNS name devolution works on Windows, that company laptop online via the Starbucks wireless connection is likely to then seek those same resources at “corp.com.”

In practical terms, this means that whoever controls corp.com can passively intercept private communications from hundreds of thousands of computers that end up being taken outside of a corporate environment which uses this “corp” designation for its Active Directory domain.
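The devolution behavior described above is easy to sketch. This is a simplified model, not the exact Windows resolver logic, and the suffix in the second example is hypothetical, but it shows how an unqualified name can walk out of the namespace an organization actually controls:

    # Simplified sketch of suffix-based name devolution (not the exact Windows
    # algorithm): an unqualified name is tried against the configured DNS suffix,
    # then against progressively shorter parent suffixes.
    def devolution_candidates(short_name, primary_suffix):
        labels = primary_suffix.split(".")
        candidates = []
        while len(labels) >= 2:                       # simplified stopping rule: quit at a two-label suffix
            candidates.append(short_name + "." + ".".join(labels))
            labels = labels[1:]                       # drop the leftmost label and try again
        return candidates

    # Suffix the organization owns: every candidate stays under example.com.
    print(devolution_candidates("drive1", "internalnetwork.example.com"))
    # ['drive1.internalnetwork.example.com', 'drive1.example.com']

    # Hypothetical tree built on the unowned "corp" shortcut: the last candidate
    # is a public name that whoever controls corp.com gets to answer.
    print(devolution_candidates("drive1", "internal.corp.com"))
    # ['drive1.internal.corp.com', 'drive1.corp.com']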

Microsoft just bought it, so it wouldn’t fall into the hands of any bad actors:

In a written statement, Microsoft said it acquired the domain to protect its customers.

“To help in keeping systems protected we encourage customers to practice safe security habits when planning for internal domain and network names,” the statement reads. “We released a security advisory in June of 2009 and a security update that helps keep customers safe. In our ongoing commitment to customer security, we also acquired the Corp.com domain.”

Posted on April 9, 2020 at 6:45 AM

The Insecurity of WordPress and Apache Struts

Interesting data:

A study that analyzed all the vulnerability disclosures between 2010 and 2019 found that around 55% of all the security bugs that have been weaponized and exploited in the wild were for two major application frameworks, namely WordPress and Apache Struts.

The Drupal content management system ranked third, followed by Ruby on Rails and Laravel, according to a report published this week by risk analysis firm RiskSense.

The full report is here.

Posted on March 18, 2020 at 7:45 AM

Firefox Enables DNS over HTTPS

This is good news:

Whenever you visit a website — even if it’s HTTPS enabled — the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can’t be intercepted or hijacked in order to send a user to a malicious site.

[…]

But the move is not without controversy. Last year, an internet industry group branded Mozilla an “internet villain” for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.

The move to enable DoH by default will no doubt face resistance, but it’s not a technology that browser makers have shied away from. Firefox became the first browser to implement DoH, with others, like Chrome, Edge, and Opera, quickly following suit.

I think DoH is a great idea, and long overdue.
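If you want to see what the client side looks like, Cloudflare (Firefox’s default DoH resolver) exposes a JSON flavor of the protocol that is easy to poke at. The browser itself uses the binary wire format from RFC 8484, but the shape is the same: the DNS question and answer travel inside an ordinary HTTPS request.

    # Sketch of a DNS-over-HTTPS lookup using Cloudflare's public resolver and its
    # JSON interface. Either way the query rides inside HTTPS, so an on-path
    # observer sees only encrypted traffic to the resolver.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "www.schneier.com", "type": "A"},
        headers={"Accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()

    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])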

Slashdot thread. Tech details here. And here’s a good summary of the criticisms.

Posted on February 25, 2020 at 9:15 AM

USB Cable Kill Switch for Laptops

BusKill is designed to wipe your laptop (Linux only) if it is snatched from you in a public place:

The idea is to connect the BusKill cable to your Linux laptop on one end, and to your belt, on the other end. When someone yanks your laptop from your lap or table, the USB cable disconnects from the laptop and triggers a udev script [1, 2, 3] that executes a series of preset operations.

These can be something as simple as activating your screensaver or shutting down your device (forcing the thief to bypass your laptop’s authentication mechanism before accessing any data), but the script can also be configured to wipe the device or delete certain folders (to prevent thieves from retrieving any sensitive data or accessing secure business backends).
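BusKill itself does this with udev rules, but the mechanism is easy to approximate. Here is a rough Python sketch using the third-party pyudev library; the serial number is a placeholder, and the response is just locking the session rather than wiping anything.

    # Rough approximation of the BusKill idea (the real project uses udev rules,
    # not this script): watch for a specific USB device being unplugged and react.
    import subprocess
    import pyudev

    TRIGGER_SERIAL = "0123456789ABCDEF"   # placeholder: the serial of your tethered USB device

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")

    for device in iter(monitor.poll, None):   # blocks, handling one udev event at a time
        if device.action == "remove" and device.get("ID_SERIAL_SHORT") == TRIGGER_SERIAL:
            # Mild response: lock the session. A harsher policy could power off
            # or shred keys here instead, as described above.
            subprocess.run(["loginctl", "lock-session"], check=False)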

Clever idea, but I — and my guess is most people — would be much more likely to stand up from the table, forget that the cable was attached, and yank it out. My problem with pretty much all systems like this is the likelihood of false alarms.

Slashdot article.

EDITED TO ADD (1/14): There are Bluetooth devices that will automatically encrypt a laptop when the device isn’t in proximity. That’s a much better interface than a cable.

Posted on January 7, 2020 at 6:03 AM

Lousy IoT Security

DTEN makes smart screens and whiteboards for videoconferencing systems. Forescout found that their security is terrible:

In total, our researchers discovered five vulnerabilities of four different kinds:

  • Data exposure: PDF files of shared whiteboards (e.g. meeting notes) and other sensitive files (e.g., OTA — over-the-air updates) were stored in a publicly accessible AWS S3 bucket that also lacked TLS encryption (CVE-2019-16270, CVE-2019-16274).
  • Unauthenticated web server: a web server running Android OS on port 8080 discloses all whiteboards stored locally on the device (CVE-2019-16271).
  • Arbitrary code execution: unauthenticated root shell access through Android Debug Bridge (ADB) leads to arbitrary code execution and system administration (CVE-2019-16273).
  • Access to Factory Settings: provides full administrative access and thus a covert ability to capture Windows host data from Android, including the Zoom meeting content (audio, video, screenshare) (CVE-2019-16272).

These aren’t subtle vulnerabilities. These are stupid design decisions made by engineers who had no idea how to create a secure system. And this, in a nutshell, is the problem with the Internet of Things.

From a Wired article:

One issue that jumped out at the researchers: The DTEN system stored notes and annotations written through the whiteboard feature in an Amazon Web Services bucket that was exposed on the open internet. This means that customers could have accessed PDFs of each others’ slides, screenshots, and notes just by changing the numbers in the URL they used to view their own. Or anyone could have remotely nabbed the entire trove of customers’ data. Additionally, DTEN hadn’t set up HTTPS web encryption on the customer web server to protect connections from prying eyes. DTEN fixed both of these issues on October 7. A few weeks later, the company also fixed a similar whiteboard PDF access issue that would have allowed anyone on a company’s network to access all of its stored whiteboard data.

[…]

The researchers also discovered two ways that an attacker on the same network as DTEN devices could manipulate the video conferencing units to monitor all video and audio feeds and, in one case, to take full control. DTEN hardware runs Android primarily, but uses Microsoft Windows for Zoom. The researchers found that they can access a development tool known as “Android Debug Bridge,” either wirelessly or through USB ports or ethernet, to take over a unit. The other bug also relates to exposed Android factory settings. The researchers note that attempting to implement both operating systems creates more opportunities for misconfigurations and exposure. DTEN says that it will push patches for both bugs by the end of the year.

Boing Boing article.

Posted on December 19, 2019 at 6:31 AM

Security Vulnerabilities in the RCS Texting Protocol

Interesting research:

SRLabs founder Karsten Nohl, a researcher with a track record of exposing security flaws in telephony systems, argues that RCS is in many ways no better than SS7, the decades-old phone system carriers still use for calling and texting, which has long been known to be vulnerable to interception and spoofing attacks. While using end-to-end encrypted internet-based tools like iMessage and WhatsApp obviates many of those SS7 issues, Nohl says that flawed implementations of RCS make it not much safer than the SMS system it hopes to replace.

Posted on December 16, 2019 at 6:00 AM
