I’ve always liked the concept of a localhost’d web app talking back to a localhost web server. It seems like a great way to get the cross-platform ease of developing the UI without having to do everything in the browser, so you can optimize the heavy lifting and don’t end up with an Electron app pulling 8 GB of RAM and 100% of 16 cores.
But I could never quite shake the nagging feeling that the localhost server couldn’t be adequately secured against outside network requests being routed to it or, as TFA mentions, inside network requests being routed away from it to an outsider!
This article helped enumerate some of the difficulties of securing such a service. Things like a memory-safe parser, checking origins, etc.
I wonder: is there a definitive guide someone has set up, or even better a sample Golang (or similar) localhost server, that demonstrates the dozen-odd layers of checks, protections, and magical incantations necessary to make such a server “secure” in the sense that a localhost UI can make requests to it and receive sensitive data, but it’s safe from external attackers trying to spoof the same requests?
This is how the Dell System Detect utility worked. Back in the day I found out that they only checked whether the referring domain ended with dell.com, so 'notdell.com' passed their validation [1].
Instant, easy, unstoppable root RCE on a lot of Dell machines from any website (it was a GET request as well, if I remember correctly). No built-in auto-update, no system tray icon, no idea if it's running. Good times.
I found something similar with HP as well [2]. Since then I've been OK with this functionality not being used and abused too much. There is too much scope for things to go wrong, and badly so.
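That bug class is easy to reproduce. Here's a hedged Go sketch of the broken suffix check versus a boundary-aware one (function names are mine, not from Dell's actual code):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// brokenCheck mirrors the described bug: any host that merely *ends with*
// the string "dell.com" passes, including "notdell.com".
func brokenCheck(referer string) bool {
	u, err := url.Parse(referer)
	if err != nil {
		return false
	}
	return strings.HasSuffix(u.Hostname(), "dell.com")
}

// saferCheck requires an exact host match, or a match on a real
// subdomain label boundary (".dell.com").
func saferCheck(referer string) bool {
	u, err := url.Parse(referer)
	if err != nil {
		return false
	}
	host := u.Hostname()
	return host == "dell.com" || strings.HasSuffix(host, ".dell.com")
}

func main() {
	fmt.Println(brokenCheck("https://notdell.com/pwn")) // true -- the bug
	fmt.Println(saferCheck("https://notdell.com/pwn"))  // false
	fmt.Println(saferCheck("https://www.dell.com/x"))   // true
}
```

(And of course the Referer header is attacker-controlled in many contexts, so even the "fixed" version is not an authentication mechanism on its own.)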
> I’ve always liked the concept of a localhost’d web app talking back to a localhost web server.
We're doing exactly this prime-time with Relica: https://relicabackup.com (sorry, not much on the landing page yet, but we have emailed out some info about the UI already [1]). That technique will allow us to distribute backup software that works the same for macOS, Linux, BSD, and Windows, right away; screenshot: [2]. And it's very lightweight.
An added benefit of this approach: we were able to take its REST API and, after writing a small custom Go package, instantly gain an elegant CLI [3], so it has a headless mode too, with all the functionality of the GUI [4].
I haven't seen much consumer software that does this, and I'm not sure why, so I feel like we're taking a bit of a risk, but I think (hope) it will pay off; the benefits have already started becoming clear and they're definitely appealing.
Thank you, yes, I think architecturally there are great advantages to splitting up an app like a client/server even when designed primarily to be accessed over localhost.
Obviously the “server” API is extremely sensitive and I think you have to assume it is effectively exposed to the outside world, even with a 127.0.0.1 binding and a firewall.
I guess if you make localhost users literally log in and establish a session, then you could consider yourself safe. But it’s a weird experience, logging into a local application. So whatever you’re doing to authenticate the request as local, it has to be unspoofable.
I just don’t think I trust the HTTP headers enough!
Yeah, we don't trust just anything that comes in on that socket. For example, we implement standard CSRF mitigations like checking the Origin/Referer headers. We also don't use DNS at all in the local frontend/backend interactions, and we require the Origin to be exactly "127.0.0.1" (or the IPv6 equivalent), which is what we bind to.
(Edit: I just looked it up again and I'm 99% sure that web pages can't override the Origin header, especially when making cross-origin requests.)
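For what it's worth, here's roughly what that kind of Origin check could look like as a Go sketch (the expected origin string, port, and function names are illustrative, not Relica's actual code):

```go
package main

import (
	"fmt"
	"net/http"
)

// originAllowed relies on the fact that browsers set the Origin header
// themselves and scripts can't override it, so an exact match against the
// loopback origin we serve the UI from is a strong signal. Requests with
// no Origin at all (e.g. some same-origin GETs) pass through this check.
func originAllowed(origin, expected string) bool {
	return origin == "" || origin == expected
}

// requireLocalOrigin wraps a handler with that check and rejects
// everything else with 403.
func requireLocalOrigin(expected string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !originAllowed(r.Header.Get("Origin"), expected) {
			http.Error(w, "forbidden origin", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	fmt.Println(originAllowed("http://127.0.0.1:8585", "http://127.0.0.1:8585")) // true
	fmt.Println(originAllowed("https://evil.example", "http://127.0.0.1:8585"))  // false
}
```

Note this only helps against browser-originated requests; a non-browser client can send any Origin it likes, which is why it's one layer and not the whole story.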
It seems like there could be unexpected interactions with other services running on 127.0.0.1, which don't even have to be HTTP to cause problems. E.g., what if there's a service that echoes the request back in the response when it receives a request it doesn't understand? A remote web page could probably use that to bounce its request to your service while appearing to come from 127.0.0.1.
Not necessarily. It could be running in a sandbox, or as unprivileged user, and accessing your app's API over localhost would allow for privilege escalation.
A login is not enough because of cross-site request forgery. A site on the internet can include CSS, scripts, or images from your service and generate GET and POST requests to it with the user's credentials.
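One standard extra layer on top of a login session is a CSRF token: a cross-site page can make the browser *send* requests with the session cookie, but it cannot read the token out of your page, so a forged request fails. A minimal Go sketch (names are mine, not anyone's actual implementation):

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// newCSRFToken generates a random token that the server embeds in its own
// UI pages and also remembers per session.
func newCSRFToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// tokenValid compares the token echoed back in a request header against
// the one issued for the session, in constant time to avoid leaking
// prefix matches through timing.
func tokenValid(got, issued string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(issued)) == 1
}

func main() {
	tok, err := newCSRFToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(tokenValid(tok, tok))     // true
	fmt.Println(tokenValid("guess", tok)) // false
}
```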
But for now we just hard-code a port. I personally prefer this since it's easy to use and convenient to remember. But if a lot of client machines have conflicts, I guess we'll change that...
Why not just change that? You're already on the same host: you can write the port that the server bound to on startup to a file, and just read that from the client.
I've implemented something similar and went through the same "I'll wait until someone complains" stage. The complaints will happen, so just prevent them right now :)
Great question. More details are coming out soon on our mailing list and Twitter (there are so many factors to consider), but in short, Relica:
- is designed for consumers who are not technically skilled
- works on Linux and BSD (I don't think Arq does)
- offers redundant cloud storage with up to 5 providers, and requires only a single upload of the data as compared to uploading it 5 times
- allows you to back up to local disk, friends' computers (or your own) running Relica (with authorization, of course), or the cloud, which is totally managed by us so non-technical users can use it
Relica also does client-side encryption and deduplication, of course. Backups can also be restored directly with restic (open source), without requiring a reliance on Relica.
Basically, Relica is a good balance of "user-friendly" combined with features for power users.
We replicate your upload after it leaves your computer (at the packet level - we can't decrypt your data, we don't even have the key for it). There were quite a few technical hurdles we had to overcome to make this work, but I gotta admit, it's really cool to see it in action. :)
This does raise the question of how fast the upload will be - how's your network?
Switching from Dropbox to OneDrive doubled my upload speed simply because there's better peering from my ISP to Akamai vs whatever Dropbox was using at the time.
Hi there! Our upload infrastructure is designed to scale to anywhere in the world where we decide to put up relays. That makes the speed variable depending on where you are and where the relay is. We are still testing on our staging infrastructure and haven't deployed to our production networks yet, so it's hard to say right now what our speeds will be. But I'd love to know more about what your speeds are like now and what you expect with your backup service. Could you tweet at either me (@mholt6), @relicabackup, or email support-at-relicabackup.com and I'll get back to you on that?
Edit: I'm happy to be wrong. It seems the cross-domain checks don't allow this. I misunderstood stuff in the article/thread.
I want to say a few things regarding the "insecurity" of localhost. I find it completely insane, and a huge oversight, that downloaded JS can access your PRIVATE localhost services. It should be able to access external public services, subject to cross-origin and similar policies. But the browser should never allow access to your localhost services by default. They are assumed to be private services, and everyone uses them as such, taking advantage of the (lack of) security implications.
If you download a program and install it, you are open to the same problems. But there is a huge difference between installing something and clicking a random link on the internet. Local services should be treated the same as files on your computer: block everything unless explicitly allowed by the user.
I reuse IPv6's current syntax here just for the example. A lot of bikeshedding would be needed. It would also be ugly to expose those kinds of internals to an end user, so you'd need some additional technology on top of this to make it look pretty.
There is an unofficial unix: URI scheme. Browsers don't support it, but system clients sometimes do; for example, nginx supports it when configuring upstream groups as a reverse proxy [1].
For running as a local app, what I really want is to be able to run the server as a UNIX-domain socket. (Well, I can do that fairly easily but what I really want is for browsers to be able to connect to one of these).
For a single-user app, the main issue is that it could be running on a multi-user system, so there's the possibility of contention for ports and so on, as well as the need to verify that the right person is connecting. While it's possible to just bind the server to the loopback address, anybody on the same system can access it there, so it's not necessarily secure enough. For localhost verification, accessing/setting some information from a file URL might work.
With the loopback address, encryption doesn't seem to matter too much: anybody capable of intercepting traffic between a piece of software and the browser will also be in a position to just directly read what you're typing. Possibly by looking over your shoulder.
However, one of the reasons I want a HTTP UI is to make it possible to use something like an iPad as an input device and there are definitely issues there when the service is something that's randomly stood up and torn down and usually running on a local network rather than the internet: in particular TLS really expects a centralized service so it seems anything other than a self-signed certificate isn't going to work and that comes with a bunch of scary messages for the user.
The other issues of authentication all seem much the same as for any other web app, though it seems to me it's possible to streamline things a bit, since it'll be quite common for a user already authenticated on one device to only need to prove that they're the same user on another.
I usually write a special middleware in Go for this. It sits in front of the actual router and HTTP handlers, and also buffers the response so it can modify it if necessary (that part is unrelated to localhost security). Off the top of my head, the checks are basically:
1. Who is the remote IP?
2. What do the Origin and Referer headers say?
3. What does the Host header say? (I always enforce a whitelist for localhost applications.)
4. Analyze the other headers: anything out of the ordinary?
5. Enforce a strict CORS policy on ALL requests, no exceptions.
6. Minimize contact points if external APIs are called (separate routers, preferably on a separate port).
7. Enforce a strict CSP (self only, no unsafe eval or anything) on ALL requests.
8. If outgoing requests are required, write a portal that they must go through (i.e. a common HTTP client instance) that enforces where the requests are allowed to go.
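A minimal sketch of a few of these checks as a single net/http middleware, assuming an illustrative port 8080 and made-up allowlists; it's nowhere near all eight items, just the shape of the thing:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// isLoopbackRemote implements check 1: the TCP peer must be a loopback IP.
func isLoopbackRemote(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

// localOnly layers checks 1-3 and 7 in front of the real handler.
func localOnly(next http.Handler) http.Handler {
	allowedOrigins := map[string]bool{"http://127.0.0.1:8080": true} // check 2
	allowedHosts := map[string]bool{"127.0.0.1:8080": true}          // check 3
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !isLoopbackRemote(r.RemoteAddr) {
			http.Error(w, "non-local client", http.StatusForbidden)
			return
		}
		if o := r.Header.Get("Origin"); o != "" && !allowedOrigins[o] {
			http.Error(w, "bad origin", http.StatusForbidden)
			return
		}
		if !allowedHosts[r.Host] {
			http.Error(w, "bad host", http.StatusForbidden)
			return
		}
		// Check 7: strict, self-only CSP on every response.
		w.Header().Set("Content-Security-Policy", "default-src 'self'")
		next.ServeHTTP(w, r)
	})
}

func main() {
	fmt.Println(isLoopbackRemote("127.0.0.1:54321")) // true
	fmt.Println(isLoopbackRemote("10.0.0.5:443"))    // false
	_ = localOnly(http.NotFoundHandler())
}
```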
> But I could never quite satisfy the nagging feeling that the localhost server could adequately be secured against outside network requests being routed to it, or as TFA mentions, inside network requests being routed away from it to an outsider!
Wouldn't that be solved by binding your listener to 127.0.0.1 (as opposed to 0.0.0.0 or your actual IP)? A socket bound to 127.0.0.1 shouldn't be reachable from the network at all.
The short answer is it depends on your firewall rules.
The long answer is that if the sender configured their routing to point localhost at your host, then your service would still accept the connection and route traffic back to the foreign address. But this type of attack can easily be firewalled against.
There is also the potential problem of reverse proxies. However that requires local machine access anyway.
If browser and server are on the same machine, you remove a whole host of the barriers to identification. You could use any sort of local knowledge like system files or your NIC as identification. NB: I haven't thought this through, but I'm sure there's something to it :)
When I was working on e-detailing apps for pharmaceutical sales reps in 2007-2010, we did this. Originally the UI was based on Flash and intended for Windows Tablet PC usage. I think the server security was pretty rudimentary.