We just ditched slack in favour of teams at our company, because slack wasn't "secure" enough. I feel like I see a headline like this twice a month. I can't ever remember seeing a similar headline for slack.
Exchange is actually still fairly prevalent, even among smaller companies. Although many of the smaller orgs that still have on-prem Exchange tend to have a migration plan to M365.
> Exchange is actually still fairly prevalent, even among smaller companies. Although many of the smaller orgs that still have on-prem Exchange tend to have a migration plan to M365.
and I hope they do. many of these smaller companies are sitting on really, really old versions. "it works" is usually the argument.
updating exchange can sometimes be painful. most of the time everything works, but sometimes things just break.
Let’s not ignore that if you’re a company self-hosting a highly available Exchange installation (plus backup infrastructure and maybe near-line storage for mail), it almost certainly involves very expensive capital and more than an FTE of labor, all of which is entirely a waste of time and resources at this point.
There are vanishingly few circumstances where it makes sense for an organization to be funding deep expertise for the direct management of an Exchange environment. This has been clear for nearly a decade.
The capex to refresh that hardware is a ridiculous waste, so yeah, it wouldn’t surprise me if the people still running those setups have very aged installations (e.g. WinSrvr 2008-12), which are as great a risk as the Exchange Server software they’re running.
The gating factor is often the expertise to plan and execute a migration with minimal disruption and loss. It’s not simple, and it’s nothing like an exchange upgrade project. It’s a downright UGLY project if a company has been abusing their mail system for years (e.g. using their mail system as a document management platform since ‘99, allowing distributed PSTs, etc.). Seen it.
Teams is halfway to the null position on the continuum - if it doesn't do anything and/or people don't want to use it, it exposes you less to vulnerabilities.
Can anyone recommend a solid website which aggregates CVE data in order to generate security scores for companies, platforms, open source projects, etc.? I know CVE data has a lot of problems, but I still suspect that this would be more objectively accurate than making security decisions based on gut feel.
I don't know of one, and making this judgement based on CVE data alone will not answer your question. Factors ignored include codebase size, customer count, internal CVE filing standards/criteria, etc.
The only signal I would take from CVE data by itself is a bias towards companies that regularly publish CVEs. The ones that don't publish CVEs regularly are hiding, ignorant, or actually secure (and the first two are more likely).
Aggregating CVE data is probably not a useful signal. Products with more CVEs are not necessarily less secure than ones with fewer.
Possibly, if a product consistently racks up CVEs over a long period of time, that might tell you something about poor security practices during (or before) that period. It might also mean that their security is now quite good!
You have to interpret the data, I'm afraid. I can't think of any useful statistical measure for comparing aggregate data across multiple products.
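To make the point concrete, here's a minimal sketch with made-up sample data (a real analysis would pull from the NVD feeds). The product names, scores, and the age-discount weighting are all my own illustrative assumptions, but they show why a raw CVE count can rank a heavily scrutinized product as "worse" than one with a fresh critical bug:

```python
from datetime import date

# Hypothetical sample data: (product, CVSS-like severity 0-10, publication date).
cves = [
    ("widely_used_product", 5.3, date(2020, 3, 1)),
    ("widely_used_product", 6.1, date(2020, 7, 1)),
    ("widely_used_product", 4.3, date(2019, 5, 1)),
    ("obscure_product", 9.8, date(2023, 6, 1)),
]

def naive_score(product):
    # "More CVEs = less secure" -- the misleading metric.
    return sum(1 for p, _, _ in cves if p == product)

def weighted_score(product, today=date(2023, 9, 1)):
    # One possible alternative: weight each CVE by severity,
    # discounted by its age in years.
    total = 0.0
    for p, sev, pub in cves:
        if p == product:
            age_years = (today - pub).days / 365.25
            total += sev / (1.0 + age_years)
    return round(total, 2)

# The naive count flags the well-audited product as "worse" (3 vs 1),
# while the weighted view flags the product with the recent critical bug.
print(naive_score("widely_used_product"), naive_score("obscure_product"))
print(weighted_score("widely_used_product"), weighted_score("obscure_product"))
```

Even the weighted version ignores codebase size, customer count, and filing standards, so it's still only an input to interpretation, not a verdict.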
It's kinda sad that they went through the whole monopoly suit a quarter century ago and here we are in Windows 11 getting OneDrive notices crammed down our throats even when there's an active work Office 365 subscription on the damn machines. (... and now Teams ad notifications in the Office suite)
It's been a while since I managed Exchange on-prem solutions, but it's quite rare to have an Exchange server listening directly on the internet. They were almost always behind security gateways which mitigated most, if not all, of these kinds of remote exploit attacks. Also, it seems that most of these exploits require authentication, which would hopefully be mitigated by implementing MFA.
Smaller orgs tend to get hit the hardest on these sorts of things, because the licensing for additional things like Edge Transport servers is harder to swallow, and they tend to have less fancy firewalls that are a bit dumber. Not every gateway can detect what a known exploit looks like.
I always had on-prem Exchange running behind a Postfix or qmail relay. You block all traffic from the internet to your Exchange server and from your Exchange server to the internet. That's network security 101, regardless of company size.
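For reference, the relay pattern described - an internet-facing Postfix in front of an internal Exchange server - looks roughly like this (a sketch; the domain and hostnames are hypothetical):

```
# /etc/postfix/main.cf (fragment)
# Accept mail for the domain, but we are not the final destination.
relay_domains  = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport
# Forward accepted mail to the internal Exchange server only.
example.com    smtp:[exchange.internal.example.com]:25
```

Exchange then accepts SMTP only from the relay's address, and the firewall drops everything else inbound, so Exchange itself never listens on the internet.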
Look at any major CVE and you will almost always see "...that attackers can exploit remotely".
It is logical - 9x% of large cyber-attacks are done digitally, not with physical proximity to the target.
Yet, we often focus on the vulnerability (zero-day, misconfiguration, business logic gap, etc.), rather than the exploit method (the network). Almost implicitly taking it for granted that the server (e.g. Exchange) needs to be exposed to the network in order to do its job.
Given the impact, shouldn't we double down on methods which enable servers to do their job without 'listening' to the network?
> Look at any major CVE and you will almost always see "...that attackers can exploit remotely".
> It is logical - 9x% of large cyber-attacks are done digitally, not with physical proximity to the target.
A remote vulnerability means one exploitable without the ability to run local code on the machine. It does not have anything to do with physical access to the machine.
good point. i simply meant that the vulnerability can be exploited from the network (with no (initial) root access to the machine) and so almost all of them are.
And how do you plan to talk to a deaf and mute server? To talk to something, you must listen.
To keep the metaphor going, "Listening" for a server just means that a client gets to say the first word to start the conversation - it is not otherwise special. The problem is the malicious conversation that follows, not who started it.
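To make that concrete, here's a minimal sketch of a TCP server with a made-up protocol (the `HELLO` greeting is purely illustrative): the `listen()`/`accept()` calls just let the client speak first; the actual risk lives in the handler that parses whatever the untrusted client sends.

```python
import socket
import threading

def handle(data: bytes) -> bytes:
    # The risk lives here, in parsing untrusted input --
    # not in the fact that the socket was listening.
    if data.startswith(b"HELLO"):
        return b"OK"
    return b"ERR"

def serve_once(srv: socket.socket) -> None:
    # Accept a single connection and answer it.
    conn, _addr = srv.accept()
    with conn:
        conn.sendall(handle(conn.recv(1024)))

srv = socket.socket()                 # "listening" = willing to hear a first word
srv.bind(("127.0.0.1", 0))            # ephemeral port on localhost
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"HELLO server")
print(cli.recv(1024))                 # prints b'OK'
cli.close()
t.join()
srv.close()
```

Whether `handle()` is safe against malicious input is independent of who opened the connection, which is the point of the metaphor.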
You can do things to control who talks to what, but an employee laptop can be compromised and abused to perform the attack from a "trusted" client, so that is no good.
Think of it like trying to protect your loved ones from misinformation and crypto scams - you don't want to shut them out of the world, telling them to only trust specific sources backfires if those people sell out or end up hosting scam ads, and teaching them to never be misled by anything might be impossible.
agree, but shouldn't we try to further reduce the attack surface? e.g. the 'server' only listens on networks which force 'clients' to authorize before they're given access to that network (not always possible, but often is). for example, this can sometimes be an auth-before-connect overlay network extended to the host, such that the server only listens on localhost.
and, agree, the clients can be compromised - loved ones can be scammed, as you said - but your loved ones are a far smaller attack surface than exposure to any attack on the internet (which too often finds its way into the dmz until day-two security and L7 authorization try to identify and terminate the rogue L3 connections).
Auth before accessing a network - perimeter-based security - is the old-fashioned corporate security model. It is generally considered a broken band-aid, as you end up with vulnerable or outright unprotected services behind a wall that many are allowed to pass.
For example, if you have a vulnerable service and allow employees access to it through the perimeter, you fail - the employee might have gone rogue or have a compromised machine.
this is where zero trust has steered us wrong, inadvertently.
absolutely, perimeter security based on weak auth (just being 'on' the WAN) is insufficient.
but improving internal security doesn't address all internet-based attacks - which are the vast majority. in those cases, auth before connect is a good practice - but the auth (authentication and authorization) itself needs to be strong, and part of a multi-layered approach (the WAF etc. can actually be simplified and do their job better when they don't have to filter the entire internet).
It does not matter how strong the auth is. As long as people can authenticate - regardless of how - there's a hole in the perimeter, and your security reduces to the capabilities of what is behind. Attacks do not even have to be targeted - when you have hundreds, thousands or even hundreds of thousands of employees, the likelihood of a laptop getting compromised by even a random attack is pretty high, and then the foot is in the door.
In other words, any kind of proper defense require the internal services to be fully battle-hardened to withstand arbitrary attacks anyway as the perimeter is breached, or you set yourself up for a catastrophic security breach. In this case, the perimeter added nothing but cost, inconvenience and a false sense of security.
If you are afraid of exposing such services without a perimeter, you should not be running those services at all.
well said and i agree. i believe we are talking about different layers. your point - essentially assume your network is always breached - absolutely.
and don't you make that 'battle hardening' simpler and more effective by reducing the attack surface? e.g. by taking your servers off the internet (meaning your inbound firewall rules become deny all inbound (even 443)). so enforce (strong) auth outside your dmz, before allowing sessions on your network (or overlay network), even for APIs, B2B etc (otherwise your fw has exceptions).
and, yes, when that gets compromised, the attacker now needs to deal with the next set of 'battle hardened' layers.
meaning, shouldn't reducing attack surface and battle hardened services be an 'and', not an 'or'?
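As an illustration of "deny all inbound (even 443)", a default-deny posture might look like this in nftables (a sketch; it assumes the overlay agent dials out and terminates connections on localhost, so no inbound allow rule is needed):

```
# /etc/nftables.conf (sketch) -- default-deny inbound
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept                      # overlay agent delivers to localhost
    ct state established,related accept  # replies to our own outbound dials
  }
}
```

The attack surface then shrinks to the overlay's auth plane plus whatever listens on localhost, which is the layering being argued for.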
The actual problem isn't the network connection, it's the untrusted data. There's nothing you can do about that if your data is intentionally coming from untrusted sources, which is absolutely the case for a mail server.
Of course you can reduce attack surfaces by having things be tunneled and/or proxied, to have close-to-minimal network access to some appliance, but that won't fully save you from this sort of problem. In my opinion, the only obvious way out is to go category-by-category and eliminate each type of flaw by-design. I think even in an ideal world you could never accomplish this 100% of the way, or at least we're not close to a world where you could right now, but you can combine this with a layered approach to security, trying to eliminate single points of failure, adding hardening anywhere you can, and making tampering both as apparent and as unreproducible as possible (i.e.: ASLR harder and more often, randomize the order of things, don't expose internal IDs, etc.)
Tell that to Azure and its cross-tenant problems because they allowlist specifically all Azure IP ranges in Microsoft products. Gotta make them Azure spammers be able to bypass email filters, otherwise they won't pay premium, right?
In my opinion there's a huge conflict of interest between Microsoft and Azure as a cloud hosting service offering o365 integrations.
> Despite Microsoft acknowledging the reports, its security engineers decided the flaws weren't severe enough to guarantee immediate servicing, postponing the fixes for later.
Well that is an egregious display of bullhockey and an almost criminal level of negligence. RCE is basically a game over level exploit.
“All these vulnerabilities require authentication for exploitation, which reduces their severity CVSS rating to between 7.1 and 7.5. Furthermore, requiring authentication is a mitigation factor and possibly why Microsoft did not prioritize the fixing of the bugs.”
Would have been nice if they were clear about what authentication is required. If it is regular Exchange user authentication, then this is pretty bad. For those who don't know: the "Exchange admins" group, whose members' credentials I would expect to be able to dump as SYSTEM on an Exchange server, is a quasi-domain-admin group. Taking over the whole domain after that should not be difficult for threat actors.
Shouldn't have to be stated, but at any given time, any company, regardless of security measures, should assume there is at least one compromised host and one stolen credential.
there are tools so that you do not need it, but, well, it's not supported.
in recent updates there is a new supported scenario: you can remove the exchange server, but you need to keep the schema and the powershell modules, and then you do everything with powershell
Well, there is AAD Connect and AAD Connect cloud sync. Btw, I still don't get that they have no solution to remove the last Exchange server. It could be so easy to make it work, but well…
Only fools and lazy governments still use Exchange servers. Anyone else sane has either moved to Office 365 or moved off Microsoft (the latter less often, good slaves).
I was doing massive HA Zimbra enterprise deployments in 2009. Shops were migrating from Exchange and *nix mbox accounts since 2003 because:
1. It was a megabitch to install and configure correctly and securely to work with Outlook, web, mobile, PDA, and other Microsoft products, and apply patch Tuesday updates depending on HA to work correctly to prevent downtime and data loss. (Blackberry BES was the sort-of answer to some crunchy mobile problems for a while because the idea of mobile apps hadn't fully materialized, so an E2EE solution seemed like the best path at the time.)
2. It was a continual attack surface, making the ops support or managed ops support far more costly than licensing. This was during the era of massive malware worms attacking the disastrous poor security of Microsoft products.
Fool me once, shame on you... I don't have much empathy for doing the same thing and expecting a different result other than breaches and loss of data and service.
How much worse does it have to get before someone holds Microsoft accountable for being criminally negligent? They have a 20+ year history of virtually non-existent concern for security.
The vulnerabilities don't pop into existence when some researcher finally goes public and/or manages to convince MS to acknowledge and fix them; if you look at a vulnerability's actual existence timeline, it's not that great a comfort.
But yes, they've been cajoled into improving their reactivity. Their customers still have a very high tolerance for security flaws, partly due to self-selection.
This is much more about a widespread culture of spreading and accessing data within Amazon.com without much care for privacy, need to access, or auditing. Although I’m no AWS fanboy, that’s not an AWS problem.
Which isn’t to say Azure doesn't have some failings. The recent signing key issue shows that even at the largest hyperscalers there are places where the basics are missing.
I would honestly say that for most of the past twenty years they did a really good job, and they have only done a particularly poor job in the past few years as they've prioritized cloud and ad revenue.
Edge on Windows Server 2022 will happily load a bunch of MSN ad garbage now by default, after Microsoft had IE run in a restricted mode on servers for the previous twenty years.
The worst vulnerability, the RCE, was patched before Microsoft was even notified. The others are definitely bugs, but they do require valid credentials, which limits exploitability and therefore risk.
These aren't nearly as bad as many of the CVEs that other vendors patch every week.
It's hard to say what kind of files an authenticated user can read on the Exchange server without details on the vulnerabilities.
I don't think that's true, but watching the presentation (https://www.youtube.com/watch?v=IyPcXWIB990), especially the last part, where the researcher explains why he wasn't paid the bounty, it sounds like they still have a lot to improve.