Hey everyone - This might be obvious to some of you, but browsers are a large attack surface in modern computing. We do all kinds of things in our web browsers: banking, shopping, paying bills, reading news, and watching videos. Some of that should remain private, so it is important that we, as users, do as little as possible to compromise that privacy. This includes the easy things: using HTTPS wherever possible (and making sure the certificate is signed by an appropriate authority) and being careful about the links we click and the sites we visit. It is also important to be careful with the software (extensions) that enhances the browser for the sake of convenience or to do something the browser's developers did not envision. Extensions can add genuinely useful functionality, like ad-blocking, or debugging tools if you develop web sites. However, even inside the walled garden of the browser's ecosystem, malicious software can slip in. This creates a problem for the browser developer: how do you scale the vetting of these extensions?
Proper vetting of an extension (which is a relatively small piece of software) can take days or months depending on its complexity. If your software or ecosystem is popular enough, you might have many developers excited to get their extension in front of your users. Either you hire a lot of people to vet extensions, or you automate the vetting process to some extent. How far you can automate depends on how robust you can make the process. Unfortunately, because many of these processes are deterministic, a malicious developer just has to tweak their code so that it does not trigger any alarms. This is akin to encoding your malware to bypass virus scanners.
We have seen in the past, with incidents like XCodeGhost, that many points in the development process can be subverted: the code itself, the compiler injecting code, or code being injected after the binary is compiled. Anything that examines apps, extensions, or whatever must be cognizant of these vectors. Even humans may miss things like compiler-introduced malicious code, because we are conditioned to take certain steps in the software development process for granted. As I have said before, you have to trust something along the line or you will never get anything done. Most people do not question the compiler that made their software, because we are trained to look at what the software does (the API calls it makes, communication attempts, et cetera), not the process by which it was made. For example, most people do not examine the process by which their food is made. As long as it does not do something bad (like make you sick), no one questions it.
Sometime around the middle of 2016, Mozilla plans to transition to Chrome-style WebExtensions. This is a departure from XUL- and XPCOM-based add-ons. XUL (XML User Interface Language) is for writing user interfaces inside Mozilla products that look like web pages; you see these most often in the configuration pages of some extensions. XPCOM (Cross Platform Component Object Model) lets a developer write their extension in a number of different languages and have it work in Mozilla products. Mozilla says it wants to move away from XUL / XPCOM for enhanced security and to give developers a way to write extensions that work across a number of browsers. The enhanced security comes via signed extensions and automated scanning of extensions uploaded to the Mozilla Add-ons site for malware. Extension signing is already enforced in Firefox 41, and if you want to host your extension on Mozilla's site (or anywhere else) and have it work with Firefox 43 onward, it has to be signed by Mozilla. This is going to place a large burden on Mozilla to vet extensions quickly, including both new extensions and changes to existing ones.
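To illustrate part of what signing buys, here is a minimal sketch of the integrity check behind it. This is an assumption for illustration only: real extension signing uses certificates and cryptographic signatures, not bare hashes, but the tamper-detection idea is the same.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of an extension package, as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a vetted extension package (hypothetical).
vetted = b"extension package as reviewed and signed"
known_good = digest(vetted)

# An unmodified download matches the recorded digest...
print(digest(vetted) == known_good)    # True

# ...but a tampered download does not.
tampered = vetted + b"; injected payload"
print(digest(tampered) == known_good)  # False
```

Any change to the package, even a single byte, produces a different digest, which is how the user can know they received what was vetted.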
To make some of this easier on themselves, Mozilla implements automated scanning of extensions submitted to addons.mozilla.org. Unfortunately, this automated scanning is apparently trivial to bypass. I think this stems from the underlying assumption Mozilla makes about the purpose of automated scanning. Automated scanning is not going to catch everything, every time. It can catch some low-hanging fruit, but an attacker who is crafty and dedicated enough will color inside the lines just enough that the scanner does not notice. This is the classic cat-and-mouse game that has played out on the desktop between malware and virus scanners for years. Code signing does have its benefits: it prevents a user from accidentally side-loading a malicious extension, it provides an audit trail, and it helps ensure that the version of the extension you download is the version Mozilla vetted.
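To make the bypass problem concrete, here is a toy version of a deterministic, pattern-based scanner and the kind of trivial rewrite that slips past it. The rule set and code snippets are made up for illustration; Mozilla's actual validator is more involved, but the failure mode is the same.

```python
import re

# Hypothetical rule set: flag known-dangerous JavaScript patterns.
# (Illustrative only, not Mozilla's actual rules.)
RULES = [
    re.compile(r"\beval\s*\("),
    re.compile(r"\bFunction\s*\("),
]

def naive_scan(source: str) -> bool:
    """Return True if the source trips any rule."""
    return any(rule.search(source) for rule in RULES)

# A direct malicious call is caught...
print(naive_scan("eval(payload);"))                  # True

# ...but the same call, trivially obfuscated, is not, even though
# it behaves identically at runtime.
print(naive_scan('window["ev" + "al"](payload);'))   # False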
I believe Mozilla needs to examine the calls that are allowed through XPCOM and categorize the ones that are uncommon or that try to gain direct access to some aspect of the software. If an extension uses these, it could be flagged for manual review. This will not stop crafty developers, or developers who turn otherwise legitimate functions malicious by using them a certain way, but it will help reject some blatantly obvious malware. For example, it is pretty easy to spot a keylogger on Windows if it makes calls to SetWindowsHookEx(), as mentioned in the article above. Automated scanning should be used to weed out the obviously malicious extensions so that human effort can be spent examining the extensions that need a closer look.
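Here is a sketch of what that triage pass might look like. The identifier list below is my own illustrative pick, not one Mozilla uses; the point is only that referencing privileged interfaces routes an extension to a human reviewer.

```python
# Hypothetical list of privileged or uncommon identifiers that should
# route an extension to manual review (illustrative, not Mozilla's).
SUSPICIOUS = {
    "nsIProcess",        # XPCOM interface that can launch external programs
    "nsIFile",           # direct filesystem access
    "SetWindowsHookEx",  # classic Windows keylogger entry point
}

def flag_for_review(source: str) -> set:
    """Return the suspicious identifiers that appear in the source."""
    return {name for name in SUSPICIOUS if name in source}

sample = 'var p = Cc["@mozilla.org/process/util;1"].createInstance(Ci.nsIProcess);'
print(sorted(flag_for_review(sample)))  # ['nsIProcess']
```

A simple substring match like this is easy to evade (see the obfuscation problem above), which is exactly why hits should trigger human review rather than an automatic verdict.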
I do not believe that whitelisting certain authors or extensions is a good idea. From Mozilla's perspective, it would be difficult to vet the internal processes of a third party well enough to make something like that work. There is no way for Mozilla to know whether there is an insider intent on leveraging the goodwill of an extension or author to distribute malware. Also, if a situation like XCodeGhost arises, where an author's toolchain is compromised, there may be no way to detect it until after the extension is published.
WebExtensions' permission manifests will help as well, because they let the user see what an extension is trying to do and what it wants access to. That way, the user can make a more informed decision about whether to install it. Code signing is useful here too, because the user has some assurance that the extension asking for those permissions is the actual extension the developer created.
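For a sense of what that looks like in practice, a WebExtension declares its needs up front in its manifest.json. The extension name below is invented, but `manifest_version`, `name`, `version`, and `permissions` are real manifest keys, and `webRequest`, `webRequestBlocking`, `storage`, and `<all_urls>` are real permission values a user (or reviewer) can inspect before installing:

```json
{
  "manifest_version": 2,
  "name": "Hypothetical Ad Blocker",
  "version": "1.0",
  "description": "Illustrative manifest only; not a shipping extension.",
  "permissions": [
    "webRequest",
    "webRequestBlocking",
    "storage",
    "<all_urls>"
  ]
}
```

An extension that claims to be a simple theme but requests `webRequest` on `<all_urls>` is the kind of mismatch a user can now spot before installing.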
I hope Mozilla makes some fundamental changes to the way it handles extension validation; if it can be trivially bypassed, it is simply not suitable. I do not think a brute-force approach to scanning extensions (examining them statically against rules) is going to work, because rules can be bent just enough to get past enforcement. This is easier said than done, and while I do not have a silver bullet, I think they could step up their game with process changes that will help if implemented correctly.
What do you think about this? How would you deal with this issue? I would like to hear your thoughts. Thanks for reading!