> To prevent these issues from occurring again in the future, Google has been pushing a "Manifest V3", which (among other, more controversial requirements) bans the practice of executing any JavaScript loaded from a remote server
It's a bit ironic that this security issue doesn't exist in Firefox [1] when they actually implement features promoted by Google in the Manifest V3. Why doesn't Chrome already block remote scripts?
I really wish we could get something like this for local, desktop apps. I'm super tired of most every new desktop "app" being just an electron runtime (which, of course, can read and write all over my filesystem) that runs now-unsandboxed js both from its own bundle, as well as downloaded willy-nilly from the internet.
Let's give engineers more credit, here. The majority of desktop applications have the same capabilities, including writing all over your filesystem and executing code from remote sources. Electron gives those same capabilities to apps written in JavaScript.
> Let's give engineers more credit, here. The majority of desktop applications have the same capabilities, including writing all over your filesystem and executing code from remote sources.
Capability, sure.
But more often than not, it is Electron-based applications which are happy to include remote scripts... Because that's often as easy or easier than vendoring your dependencies and bundling them.
In practice, Electron apps probably pose a bigger security risk, at least for that threat vector.
For a while you could get persistent native code execution (effectively root) in Slack, Discord and Microsoft Teams by dropping a text file in a specific spot in AppData/homedir/etc. Didn't even need to be chmodded or anything, just drop the file there and the electron apps would run it at startup with full permissions. Really dramatically lowered the difficulty of running code on an end-user's machine, especially since that directory is writable by virtually anything, including an electron app you've XSS'd.
I think all 3 have been patched to prevent that particular attack, but it's astonishing to me that electron apps don't seem to use any form of code signing.
My HackerOne reports were all Out of Scope, naturally, until parts of the attack were assigned CVEs later and someone else got the bug bounty :) At least it's fixed!
Yes and no. Chrome does a ton of stuff to prevent foreign processes from messing with it, so exfiltrating your gmail session tokens or emails via a native app is very difficult. In a chrome extension? Laughably easy, tons of extensions have read/write access to all your gmail tabs because Google does nothing to warn users about how dangerous that permission is. The extension APIs are poorly designed such that you need that wildcard access for all sorts of things, too.
They did eventually try to shut the barn door after the horse bolted by changing the wildcard permissions system to have manual domain filtering but I never saw them actually shut it off by default (doing so would break existing extensions). Maybe they will in Manifest v3 since they're perfectly happy breaking ad blockers.
The restriction is worthwhile for security, but it is also a pain for some extensions. For example, a Google Translate add-on for Firefox can't be registered on Mozilla's store.
Indeed, it is not possible to use Google translate without running a remote script. However, this is not justified from a technical point of view as far as I know.
They don't have any trouble because they don't run remote scripts. This problem is specific to the implementation (and the TOS) of the Google translate API.
Um, question, does remote code (as defined by the Webstore and Mozilla) refer to JS that is downloaded and eval()ed in the background and content script contexts, or <script> tags that are injected into the page, or both?
Ah, thought it was allowed with 'unsafe-eval' in extension CSP. Never had a need for it. Remote code refers to injecting <script> tags into the page then?
When running as a WebExtension you have a higher set of permissions than a regular script (you can, to some degree, control the browser, after all), so certain parts of JavaScript are off-limits.
These extensions don't need to run remote scripts to work properly, and they work perfectly in Firefox. Typically, "legitimate" extensions use remote scripts to dynamically insert trackers or ads.
As if supply-chain attacks, ransomware, and all the zero-click wormable vulnerabilities we get every other day were not enough.
I like gardening my small personal home server, services and backups, but there is no reason Debian packages could not be subject to the same supply-chain "evil maid" or upstream "evil new maintainer" attacks. Everything being done in the open and reviewed makes it less probable, but not impossible. Sigh.
As a company, "risk" is mostly insurance. As an individual, it’s anxiety.
> but there is no reason debian packages could not be subject to the same supply chain "evil maid" or upstream "evil new maintainer"
I'm always suspicious of the number of blogspam generic linux help advice sites that get you to install some random ppa complete with a nifty little code snippet that automatically installs certs and updates your sources.list! How handy!
You can make it so that the server returns benevolent-looking code when audited with just "curl URL", but malware when curl is piped directly into bash: bash executes the script as it streams in, so an early long-running command stalls the download in a way the server can detect.
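One defensive habit (a sketch; the URL and file names are illustrative) is to separate downloading from executing, so the server has no way to serve different bytes to the auditor and the executor:

```shell
# Hypothetical URL; in reality you would do:
#   curl -fsSL "https://example.com/install.sh" -o install.sh
printf 'echo hello\n' > install.sh   # stand-in for the download above

# Audit the exact bytes you are about to run, then pin them with a checksum
# so a later re-download can be compared against what you reviewed.
sha256sum install.sh > install.sh.sum
sha256sum -c install.sh.sum && bash install.sh
```

Because the script hits the disk before it hits the interpreter, the "different payload for pipes" trick no longer buys the attacker anything.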
I love using sites like that for my personal computer/projects, but I never copy and paste code snippets or install PPAs on work machines or computers with magic internet money on them
Yes, that was very sad. I really didn't think they would sink so low. I've always just used Raspbian on my Pi's without giving it much thought. Remotely activated Microsoft spyware was nowhere on my radar.
Every package manager is a horrible vulnerability (along with being a useful tool). When you package a webapp:
- Debian
- Maven
- NPM
- On the dev machines: Brew, Chrome extensions...
Aren’t they very easy to exploit, for a mildly dedicated actor? I don’t see any decent solution to this. Any line could contain a wget | bash...
The future is sandboxed apps with flatpak. Who cares if Spotify is malware when it can't access anything.
Wayland, SELinux, Flatpak, PipeWire. These will save us or at least reduce the problem of evil maintainers.
This model has been tried and proven for over a decade on mobile. What we call malware on mobile is usually an app doing bad things with what you enter into the app itself, not the desktop-class "steals all your data and then encrypts it".
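To illustrate the model, Flatpak lets you inspect and tighten what a sandboxed app may touch from the command line (these are real Flatpak subcommands, but the app ID is just an example):

```shell
# Show the permissions an app shipped with.
flatpak info --show-permissions com.spotify.Client

# Override them per-user: cut off filesystem and network access.
flatpak override --user --nofilesystem=host com.spotify.Client
flatpak override --user --unshare=network com.spotify.Client

# Review the overrides currently in effect for the app.
flatpak override --user --show com.spotify.Client
```

So even if a maintainer turns evil, the blast radius is whatever holes you have explicitly left open.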
Except that often the sandboxed apps are also a nuisance to work with. They don't pick up themes from the desktop, keyboard shortcuts don't work anymore (if you have e.g. set some global shortcuts), exchanging data with other programs can be a pain as well...
They are only a nuisance right now. Almost all of the issues you encounter are minor implementation problems and not fundamental issues with sandboxing or flatpak. Things are getting better.
I'd like to agree, but as always, the problem is the security/ease-of-use tradeoff. SELinux can be a nightmare to deal with, particularly if you're compiling an application from source and you want to take advantage of it (and even if you aren't, it can lead to mysterious failures). I managed to make Asterisk work in SELinux, until I tried to add a Bluetooth channel module to the mix. At that point, I was backed into a corner; there just didn't seem to be a way for me to let Asterisk access Bluetooth with SELinux running.
So you count on SELinux. Who maintains the SELinux policies for you, so that it knows what a program is and isn't allowed to do? People, just like other software maintainers.
There is no way around having to trust somebody else.
Maintainers do not review code. At best they test it to make sure it works. The Debian maintainers let a time bomb (borderline malware) into the xscreensaver package without noticing it.
Maintainers do not have the time or ability to check for even intentional malware let alone security bugs.
Debian has been working towards reproducible builds [1].
What this means is that the package maintainer is unable to alter the binary/package outside of the publicly available sources.
In the case of the web extension, the maintainer could build whatever software they wanted, not necessarily the source you see on GitHub or elsewhere. A reproducible-build system would prevent this type of attack (the one I'm talking about here).
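The check this enables is simple (a sketch with stand-in files; a real verification would rebuild from the tagged source with the pinned toolchain and download the published package from the store):

```shell
# Stand-ins for the two artifacts being compared.
printf 'extension bytes\n' > local-build.zip     # what you built from public source
printf 'extension bytes\n' > store-package.zip   # what the store actually serves

# With a reproducible build, the two artifacts match byte for byte,
# so a maintainer cannot slip extra code into the published package.
if [ "$(sha256sum < local-build.zip)" = "$(sha256sum < store-package.zip)" ]; then
    echo "store package matches the public source"
else
    echo "MISMATCH: store package was not built from the public source"
fi
```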
Shoutout to people like the author who look at various open-source apps to spot malicious activity. Open-source libraries are a great contribution to software, but they can also be dangerous if malicious code is sneaked into a library used by safety-critical software. And web extensions are safety-critical software.
Terrible headline makes it sound like the open-source-ness made the attack possible to occur instead of possible to be discovered.
"Of course, as the vast majority of the users of The Great Suspender were not interested in its open-source nature, few of them noticed until October, when the new maintainer made a perfectly ordinary release on the Chrome Web Store. Well, perfectly ordinary except for the minor details that the release did not match the contents of the Git repository, was not tagged on GitHub, and lacked a changelog."
I want to clarify a few points in the article re. uBlock Origin, they might be seen as minor details but to me they matter.
> Raymond Hill, after (you guessed it) he transferred ownership of uBlock to a new, untrustworthy maintainer
Only the GitHub repo was transferred; I never transferred the extension in the Chrome store or the Opera store.[1]
The Firefox version was published by a contributor, and he chose to stay with the new maintainer; as a result, I created a new publication for uBlock Origin in the Firefox store.
All this was nearly 6 years ago.
> Aljoudi began reducing blocking features, eventually choosing to permit certain ads via the "acceptable ads" program
"Acceptable Ads" was added to "uBlock" in February 2019 by the new owner, BetaFish Inc. (maker of AdBlock).[2]
BetaFish Inc. was itself sold circa October 2015 to an (still) anonymous buyer.[3]
> Hill created a fork, now called uBlock Origin, which reverted the changes
I didn't revert any change, I forked while I was still controlling the GitHub repo.[4] If you look at the project timeline, it shows that I have been in charge since the first commit in June 2014.[5]
> Nano Defender and its 200,000+ users, upon their recent acquisition, immediately began having their personal data mined.
Note that the malware did not require the blocking ability of the webRequest API to collect the data, it needed only the observational ability, which is not deprecated by Manifest v3.[6]
Thank you for your work on uBlock Origin! I mean it; it plays an important role in protecting a lot of internet users.
I wanted to ask: did you ever consider going after the uBlock maintainer through legal action?
A DMCA takedown comes to mind, as does registering the uBlock trademark and forcing him to change the name... or revoking his rights specifically to publish new updates/changes to the codebase.
uBlock, meanwhile, is just as scammy as AdBlock and AdBlock Plus, both of which are owned by eyeo GmbH (with their "acceptable ads" program, which they abuse to force websites into the program while taking 30% of ad revenue for the "allowance").
And I would hate to see uBlock dragging uBlock Origin through the mud with its name.
Thank you for the productive response! I am definitely not the author of the article. I have no idea who they are; the similarity between our usernames is entirely coincidental.
I wasn't aware that you hadn't transferred the rights to the Web Store. I suppose the article wasn't clear enough on the timeline for the uBlock Origin swap: it was included because it was another case of maintainership change gone wrong, and it was closely related to the Nano Defender situation that was virtually identical to The Great Suspender.
I was aware that acceptable ads were added much later than the change in ownership: however, I thought the removal of per-site switches would be less relevant to the modern situation. On review, it does seem to imply that the impetus for the fork is that change: my apologies.
I trust that you didn't revert any changes, but Git doesn't necessarily preserve that information. Some Git commands ('git reset --hard') remove changes from history without leaving a log of them. Much of what I could find about the change seemed to imply that you had reverted them, as opposed to simply never taking them: the difference is mostly academic, in my opinion.
Thanks for putting that clarification here: it wouldn't have fit well in the article, but it is worth mentioning. Manifest V3's new restriction on remote code is the main relevant security addition, and I am not a fan of how they bundle that in with the other changes. That restriction would make it a lot harder for these sorts of changes to fly under the radar. Nano Defender's malicious changes were quickly discovered; The Great Suspender flew under the radar for months.
> On review, it does seem to imply that the impetus for the fork is that change: my apologies.
No worry. I forked the repo at the same time I transferred ownership. The reason was simply that I wanted to get back to mostly working on the code base, as the issue tracker had become a burden taking most of the time I allocated to the project, and I found that @chrisaljoudi was good at handling open issues.
The whole idea of me reviewing software before I install it breaks when I can’t easily install it from the source I reviewed.
This reminds me of why GitHub should create its own app store (yup, I'm gonna beat this drum):
I could navigate the source, ask questions about parts that make me suspicious, and then install a prebuilt binary with confidence that the binary was the sum of what I reviewed.
Or allow local builds, including custom patches. Giving people agency over the machines that run their lives is way too radical to be popular, though.
>Does this update add new network requests, for example.
This is fundamentally impossible to do, because you can smuggle data out by injecting content scripts into a given page and making requests in that context.
>He transferred the GitHub repository and the Web Store rights, announcing the change in a GitHub issue that said nothing about the identity of the new maintainer. The announcement even made a concerning mention of a purchase, which raises the question of who would pay money for a free extension, and why.
Wow, this is clearly the fault of the maintainer. Too much power in one person's hands. He literally had the security of so many people at stake, and he sold them out for whatever it was worth. Although there's another angle here, that of maintainers getting paid for their work, this is totally not in line with the ethics of how open source should be done.
That, and they inserted random JavaScript from a website that could change at any moment, and created a shady domain name masquerading as affiliated with a different analytics provider to funnel analytics to (and who knows what else, but I think passwords and logins were pilfered). They also gave themselves webRequest permissions for no reason, which alters the threat profile too.
I've wondered about whether something like this could happen to the "Bypass Paywalls" Chrome extension [1]. However what makes me feel more comfortable (and please correct me if I'm wrong) is that in order to use the extension, you need to save a copy of it locally and then drag that over to Chrome to install it. If I delete the local version of the extension then it no longer works. Assuming that there were no malware at the time of downloading the extension from GitHub, does this mean that no one can "push" malware code to my local version of the extension or "push" anything to GitHub that could interact with my local version in a malicious way?
That understanding is correct. In fact, that technique is one of those recommended for using The Great Suspender safely.
I should note that the manifest can specify an 'update_url' that enables auto-updating behavior: and it does, in fact, appear that this extension has one. If you remove that line from the manifest, that behavior will cease.
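For reference, the key in question looks like this in the extension's manifest (values here are illustrative, not taken from the actual extension):

```json
{
  "manifest_version": 2,
  "name": "Example Extension",
  "version": "1.0.0",
  "update_url": "https://example.com/updates.xml"
}
```

Deleting the `update_url` line before loading the unpacked extension stops Chrome from checking that feed for new versions.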
Thanks, I actually just noticed the GitHub page says "The Firefox version supports automatic updates" so I guess it's safe to say that the Chrome extension won't automatically update?
F-Droid builds all of their apps from the publicly released source code. There's no reason why Google couldn't do the same, at least for apps hosted on well-known coding/review platforms like GitHub.
That's a good point. Is there some kind of time lag between the builds and the repo updates? If there's no time for anyone to check the code, then the door is still slightly open for malicious code to enter the store without scrutiny.
Providing some sort of customer service that reacts at least when the press comes knocking? Using their considerable financial and legal firepower to sue into the ground whoever pushed the malwarized version of The Great Suspender? Enforcing that Chrome extensions be built on Google infrastructure from public source code repositories to prevent silent takeovers, at least for extensions with 10k+ subscriptions?
Alert people that they should rotate their passwords?
The last one is so crucial. Had I not read that LWN post, I would never have noticed that the extension didn't just run adware fraud but also snooped passwords. Seriously, fuck Google and their disgusting zero communication attitude.
The whole idea of extensions is fundamentally broken. You are letting an unknown person who you have no way to trust or hold accountable access all of your web data.
It can't be fixed without crippling the system. You can't sandbox permissions because the most basic and useful tools require full access to every website.
The only way I can think of is requiring all extension developers to have their identity verified and to be from a country with a compatible legal system, so that Google can take legal action against malware developers.
I shouldn't be forced to run someone else's code to look at a publication. That's the entire point behind using something as ugly as XML (or its simplified child, HTML) to begin with: this is supposed to be a document markup language. A method of annotating what an author would _like_ to have happen when rendering the data.
I seriously loathe the fetish for creating pixel-perfect displays that treat the end user as an actively hostile element; a passive consumer, rather than someone empowered to use the data for their own enlightenment in the manner they prefer. (Font size, screen reader, dark/light mode, etc.)
> JavaScript served by the sites; exact same issue.
It's not really the same issue.
If you are visiting a site that uses its own JavaScript, you can probably assume that if it's run by someone trustworthy, the script isn't going to try stealing your passwords or credit card number. There shouldn't be any reason for the web page to have access to anything that you're not providing it with anyway.
A browser extension (like an ad blocker) can access the content on every page you visit. That could be your bank, email account, social media - anything. If you have a malicious browser extension, it can see everything you do.
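The scope of that access is visible in the extension's manifest; a content blocker typically has to request something like this (illustrative Manifest V2 fragment, not taken from any particular extension):

```json
{
  "permissions": [
    "<all_urls>",
    "webRequest",
    "webRequestBlocking"
  ]
}
```

`<all_urls>` is the host permission that grants read access to every page you visit, bank and email included, which is exactly why a hijacked extension is so much worse than a hijacked website.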
That was what Mozilla did when they had the resources to do so... now I think they have a different tactic, but if you obfuscate your code they require a link to the full, unadulterated source. Google operate(d/s?) on the permissive model, where you pay a small membership fee and then they let you put it up unrestricted, and if it gets too many reports it gets pulled. I think Mozilla has the slight edge, personally, but either way I'd be wary of installing extensions willy-nilly.
Yeah, it's kind of a big dichotomy though... they only review very few extensions after running their automated testing on them. My extension is marked as unverified, and there doesn't seem to be a way to change that other than reaching some unknown critical mass of users at which they become interested in doing a manual review. Oh well... not a big deal, I guess, and the extension is mostly for my own usefulness.
Making it so extensions don't auto-update themselves would be a helpful step in the right direction, it would cut down on the impact of when these things happen. Unfortunately I think we'll sooner see Firefox aping Chrome on this than the other way around.
The real security advice is: keep up to date with security patches. Staying up to date just because is not good advice.
Gentoo has a nice system, "Gentoo Linux Security Advisories", where you can periodically run a program called glsa-check which lets you know if you have packages installed that have security problems, what the problems are, and points to more info (like CVEs). You can even have it upgrade stuff on its own if you don't want to think about it. Something like this would be a nice feature for browser extensions.
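Typical usage looks like this (glsa-check ships with Gentoo's gentoolkit; the advisory ID shown is just an example):

```shell
# List GLSAs that affect currently installed packages.
glsa-check --list affected

# Show the details (and referenced CVEs) for one advisory.
glsa-check --dump 202101-23

# Upgrade whatever is needed to resolve all applicable advisories.
glsa-check --fix affected
```

An equivalent for browser extensions, "this installed extension has a published advisory, here's what it does and here's the fix", would be a real improvement over silent auto-updates.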
[1] https://blog.mozilla.org/addons/2019/12/12/test-the-new-csp-...