
It might be worth it to analyze a bit more.

Like, what would be different if the software were closed source and the developers paid by companies? I think it would be at least as hard to notice such an exploit, and sometimes it might be easier (if the company is located in your jurisdiction).

Maybe the current mindset of assembling programs could be improved. There is a trend in some architectures to separate everything into its own container, and while I don't think it can be applied directly everywhere, that model gives more separation for cases like this. Engineering is an art of trade-offs, and maybe now we can afford different trade-offs than 30 years ago (when some of these things were decided).



… and everything old is new again.

DJB’s qmail, written in 1995, was made of 5 distinct processes, owned by different users, with no trust among them. Coincidentally, it was the MTA with the best security record for more than a decade (and also the most efficient one).

It would likely have had a similar record even if it were a single monolithic process (because DJB), but it was built as 5 processes, so even if one falls, the others do not.
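
To make the pattern concrete, here is a minimal sketch of the privilege-separation idiom qmail leans on (not qmail's actual code; the "qmaild" account name below is just a stand-in): fork a worker, permanently drop it to an unprivileged user before it touches untrusted input, and let the two halves talk only over a pipe.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <pwd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == 0) {
            /* Worker: drop privileges before parsing anything untrusted.
               "qmaild" stands in for a dedicated unprivileged account. */
            struct passwd *pw = getpwnam("qmaild");
            if (!pw || setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
                fprintf(stderr, "failed to drop privileges\n");
                _exit(1);
            }
            close(fds[0]);
            /* Even if this process is compromised, it holds no
               credentials and can only write bytes into the pipe. */
            dprintf(fds[1], "parsed message\n");
            _exit(0);
        }

        /* Parent: keeps its privileges, but treats the pipe contents
           as untrusted data, never as commands. */
        close(fds[1]);
        char buf[256];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("from worker: %s", buf); }
        waitpid(pid, NULL, 0);
        return 0;
    }

A compromised worker can then lie about the data it parsed, but it cannot reuse the parent's privileges.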


The problem is most developers and companies simply don't care, or are even hostile to improvements ("this is not the Unix way"). We have had SELinux for over two decades. We can do even more powerful isolation than qmail could at the time, yet nobody outside Red Hat and Google (Android/ChromeOS) seems to be interested. Virtually all Linux distributions largely rely on a security model from the '70s and a packaging model from the '90s.

This is compounded by one of the major distributions providing only 'community-supported security updates' for their largest package set (which most users don't seem to know), which unfortunately means that a lot of CVEs are not fixed. A weak security model plus outdated packages makes our infrastructure very vulnerable to nation state-funded attacks. The problem is much bigger than this compromise of xz. Hostile states probably have tens if not hundreds of backdoors and vulnerabilities that they can use in special cases (war, etc.).

It's endemic not just to open source. macOS has supported app sandboxing since Snow Leopard (2009), yet virtually no application outside the App Store (where sandboxing is mandatory) sandboxes itself. App sandboxing could stop both backdoors from supply chain compromises and vulnerabilities in applications in their tracks. Yet, developers put their users at risk and we as users cheer when a developer moves their application out of the App Store.

It's time for not only better funding, but significantly better security than the '70s/'80s Unix/Windows models.
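
To illustrate the "more powerful isolation than qmail could do at the time" point: on modern Linux, any unprivileged process can shrink its own kernel attack surface with a seccomp-BPF filter. A minimal sketch (error handling and the recommended architecture check are elided):

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>
    #include <linux/filter.h>

    int main(void) {
        struct sock_filter filter[] = {
            /* Load the syscall number. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            /* Allow read, write, exit_group; kill on anything else. */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 3, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof filter / sizeof filter[0],
            .filter = filter,
        };

        /* Required so an unprivileged process may install a filter. */
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

        write(1, "still alive\n", 12);   /* read/write: allowed */
        /* Anything else (an open(2), a connect(2), ...) now kills us. */
        _exit(0);                        /* exit_group: allowed */
    }

After the filter is installed, even a fully compromised process cannot open files or sockets; the kernel kills it on the first disallowed syscall.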


> no application outside the App Store (where sandboxing is mandatory) sandboxes itself.

Usually applications distributed outside the app store would simply not work (or be limited to close to useless) if sandboxed.

My pet example these days is DaisyDisk: the App Store version cannot show what takes up 10-30% of my space, and can't delete protected files in /Applications etc.

Which would be nice if it were a malicious free-to-play game, but it's an application that graphically reports what's taking space on your computer and optionally deletes stuff that you've chosen. So it simply can't work well inside the sandbox.


> Usually applications distributed outside the app store would simply not work (or be limited to close to useless) if sandboxed.

I disagree. Sure, there are some applications that need to be distributed outside the App Store because they need additional privileges (like DaisyDisk), but there are many applications that are distributed outside the app store that could be sandboxed. Just to give some examples, why do Discord, Signal, Obsidian, Dash, or 1Password have unsandboxed processes? (1Password was in the App Store and sandboxed before it became an Electron app.)


Well we can't blame Electron for this, as much as I'd like to, since there are Electron apps in the app store.

Discord asks for the accessibility option to read system wide keystrokes for push to talk. Can sandboxed apps do that?

Also, no matter how secure the app store is, I'd very much like to be able to install applications without asking Apple's permission. So having something available in the app store doesn't give me a warm fuzzy feeling.


> Discord asks for the accessibility option to read system wide keystrokes for push to talk. Can sandboxed apps do that?

You can register a shortcut in the sandbox; you cannot read system-wide keystrokes from the sandbox.
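
For the curious, here is roughly what "register a shortcut" looks like via the old Carbon hotkey API, which still works from the sandbox because the app is only told when its own registered combination fires and never sees the keystroke stream. A sketch only; the Cmd+Shift+P binding and the handler body are placeholders:

    /* build: cc hotkey.c -framework Carbon */
    #include <Carbon/Carbon.h>
    #include <stdio.h>

    /* Invoked only for our registered combination; no other keystrokes
       are ever delivered, which is why this is allowed in the sandbox. */
    static OSStatus hotkey_handler(EventHandlerCallRef next,
                                   EventRef event, void *data) {
        printf("push-to-talk hotkey pressed\n");
        return noErr;
    }

    int main(void) {
        EventTypeSpec spec = { kEventClassKeyboard, kEventHotKeyPressed };
        InstallApplicationEventHandler(NewEventHandlerUPP(hotkey_handler),
                                       1, &spec, NULL, NULL);

        EventHotKeyID hkid = { 'demo', 1 };  /* arbitrary signature/id */
        EventHotKeyRef ref;
        /* Cmd+Shift+P, chosen arbitrarily for this sketch. */
        RegisterEventHotKey(kVK_ANSI_P, cmdKey | shiftKey, hkid,
                            GetApplicationEventTarget(), 0, &ref);

        RunApplicationEventLoop();
        return 0;
    }

Reading arbitrary keys system-wide (rather than a pre-registered combination) is the part that needs the accessibility permission.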


Some key apps on macOS do sandbox themselves, most obviously Microsoft Office.

The main issue is that the tooling around sandboxing is poor. If there are violations the best you're going to get is an opaque message logged to an unbelievably verbose log torrent. Also, the low level sandboxing APIs you need to really batten things down for internal components aren't documented. Chrome uses them anyway, and they just have to rely on the fact that they're so big that Apple won't break them. But if you're smaller, it's a risk.
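
To illustrate the tooling point: the one public entry point, sandbox_init(), has been deprecated since 10.8 with no documented replacement for custom profiles, which is roughly the territory Chrome reaches into anyway. A sketch of the deprecated-but-still-shipping call, using one of the handful of named profiles:

    #include <sandbox.h>   /* deprecated since macOS 10.8, still ships */
    #include <stdio.h>

    int main(void) {
        char *err = NULL;
        /* kSBXProfileNoInternet is one of the few named profiles. */
        if (sandbox_init(kSBXProfileNoInternet, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }
        /* From here on, outbound network access is denied; violations
           show up (tersely) in the system log. */
        printf("sandboxed\n");
        return 0;
    }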


Playing devil's advocate: is the current level of security the biggest problem of computer systems?

Just two (similar) examples that come to mind: large monopolies (ex: Microsoft in the '90s), and imbalanced power between actors (ex: only nation states/huge companies can do certain things, not a small group of hackers).

I think diversification and accessibility of the technology would solve more problems overall than just focusing on security. It is just hard to strike a balance between efficiency and diversity (ex: one Linux distribution might be efficient resource-wise, but it is not diverse; how many and how different would be diverse enough?).


It was also essentially unusable without a crapload of third-party patches that DJB would not include in the master release, but yes, it was quite secure :-)


And it was highly vulnerable to denial-of-service attacks. It didn't check whether the mailbox was valid during the envelope phase, so it would queue basically everything, then check the mailbox and send a bounce if necessary. Sending thousands of messages to random mailboxes (a dictionary spam attack) would queue thousands of bounce messages that would be rejected by the (faked) sender domain, bringing the qmail server to its knees. Ask me how I know this...
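
For contrast, the usual mitigation (which third-party qmail patches later bolted on) is to validate the recipient during the envelope dialogue, so the attacker's connection pays the cost and nothing is ever queued or bounced. A hypothetical sketch, with mailbox_exists and smtp_reply as stand-ins for real implementations:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for a real lookup (passwd, virtual table...). */
    static int mailbox_exists(const char *mbox) {
        return strcmp(mbox, "postmaster") == 0;
    }

    /* Hypothetical stand-in: write an SMTP reply to the client. */
    static void smtp_reply(int code, const char *msg) {
        printf("%d %s\r\n", code, msg);
    }

    /* Reject unknown recipients at RCPT TO time, so nothing is queued
       and no bounce is ever generated toward a forged sender. */
    static int handle_rcpt(const char *mailbox) {
        if (!mailbox_exists(mailbox)) {
            smtp_reply(550, "No such user");
            return -1;
        }
        smtp_reply(250, "OK");
        return 0;
    }

    int main(void) {
        handle_rcpt("postmaster");  /* 250 OK */
        handle_rcpt("zzrandom");    /* 550: queue untouched, no backscatter */
        return 0;
    }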

Thing is, in most companies, it's cheaper and more efficient to deal with a sporadic vulnerability than to have your e-mail system DoSed every other week.

These are the kinds of compromises that normal people and companies have to make all the time, but that radicals and cypherpunks like DJB can't seem to understand. Sure, he's a brilliant mathematician and cryptographer, but his grasp of reality outside academia seems very flimsy, IMO.


My qmail setup in 2000, on a humble beige box, was occasionally under a “thousands of bad addresses” attack, but I only found out about it a few days later while reviewing the logs. There surely was a threshold where it would be down on its knees - but “thousands” and even “tens of thousands” wasn’t it. The exchange server it replaced, though, would crash and burn very often, for a variety of reasons.


Does any private Microsoft/Google/Apple/whatever program have any backdoors? We don’t know and we will never know.

At least with open source we are able to detect them.


I don't think this is necessarily true. People do a lot of reverse engineering of proprietary OSes and a lot of vulnerabilities are found that way (besides fuzzing). And the tooling for reverse engineering is only getting better.

Also, let's not forget that this particular backdoor was initially found through behavioral analysis, not by reading the source code. I think Linus' law "given enough eyeballs, all bugs are shallow" has been refuted. Discovering bugs does not scale linearly with eyeballs, you need the right eyeballs. And the right eyeballs are often costly.

If your implicit premise is that having the source code available makes analysis easier than with closed source, you can also flip the argument around: it is easier for bad actors to find exploitable vulnerabilities because the source code is available.

(Note: I am a staunch supporter of FLOSS, but for different reasons, such as empowerment and freedom.)


Yes, and they are eventually discovered by reverse engineering.

Example: https://en.wikipedia.org/wiki/NSAKEY


Google's Fuchsia OS looks promising, but it doesn't look like it (or something like it) will go anywhere until the world accepts that you probably have to build security into the fabric of everything. You can't just patch it on.


I think Google has given up on Fuchsia, outside some specific domains, right?

I think currently even Android, iOS, and ChromeOS have far better security than most desktop and server OSes. I think of the widely-used general purpose OSes, only macOS comes fairly close because it adopted a lot of things from iOS (fully verified boot, sealed system partition, app sandboxing for App Store apps, protected document/download/mail/... directories, etc.).


QubesOS is the closest thing we have to a better security model in the desktop area.


There isn’t much closed source software which is depended on as heavily as things like xz. The only one I can think of is Windows, which I think it’s safe to assume is definitely backdoored.


Companies are infiltrated all the time. [0]

However, even if a company detected the infiltration, the incentive is to keep quiet about it. Let's say a closed-source "accessd" was backdoored: a database admin notices that the new version (accessd 2024.6 SaaS version model 2.0+ with GPT!) is slower than the previous one and puts it down to enshittification. Or they contact the company, which has no incentive to spend money to look into it. There's no way the database admin (or whoever) can look at the git commit chain and see where the problem happened.

[0] https://rigor-mortis.nmrc.org/@simplenomad/11218486968142017...



