I think we're missing some information here. The supplied link says that the applications have been renamed due to legal threats. This seems completely reasonable to me. The names of the apps are trademarks and for a security product, who builds it is important to the integrity of the mark.
I'm trying to remember how Android works, but I seem to recall that you need to sign the packages differently on Play and Fdroid. So you literally can't redistribute the same Play package with Fdroid (someone correct me if I'm wrong). This means rebuilding... and hence rebranding.
It seems that MM was asked to provide a build for Fdroid. He decided not to. That's completely his right. He doesn't go into a lot of detail about why he has decided this, but it's completely up to him.
So all I can tell is that there is an Fdroid version, which has a different name. You can't switch easily between the Play and Fdroid versions because of code suckage... which sucks, but isn't a GPL violation.
Is this just a tempest in a teapot, or am I missing something?
Thanks for that link. It is much more informative than the other one. I can see where he's coming from. Some of the things he wants as a developer are things that I don't personally want as a user (automated updates), but then I can build and install the thing myself, as he says.
Correct me if I'm wrong, but couldn't he host a private repository, like the Guardian Project does, with his own binary signed with his own key? People would have to manually add the repo, but I believe F-Droid lets you do that. F-Droid just wants everything they build to be signed with their key, which makes sense to me. The Pale Moon dev had the same or similar reservations last I checked, but I heard nothing about a private repo.
It would be more work for him of course, maintaining a repo and pushing updates to two locations, but I don't think it would be too much extra work, and it sounds like a lot of people are looking for a non-GP repo. Pushing to your own server is easier than updating in an app market any day.
At the end of the day, he can of course spend his time how he wants, I just think there's a loud if not large group that would like a Google Play Services-less Signal.
Unless I'm mistaken, he's already done that. He just had to rename it to something else. So AFAICT there is nothing to see here. As you say, F-Droid already does that with several packages (and even renames them when requested).
Some people are clearly angry, but it seems that this is yet another case where people are angry because they don't understand the GPL or don't understand the situation (or both).
If you're talking about https://fdroid.eutopia.cz/, I don't think that's the same thing. This isn't Moxie's build, and because of the name change it isn't completely Moxie's code either, unless he's got a build flag for changing the name everywhere; I'm not sure.
I download things from F-Droid, mostly for convenience and security since I get notified about updates, but I understand it is another person/group I need to trust. I trust, for whatever reason, that they will build directly from source and update frequently.
Ways this could go, as I see it:
0. Moxie's code, moxie's build, Google's repo - The current way I know of getting moxie's build, but you have to trust Google as well.
1. Moxie's code, moxie's build, moxie's repo - I think this would be best, and what I was talking about.
2. Moxie's code, fdroid'd build, fdroid's repo - what I would also like, but moxie publicly discourages unofficial builds so fdroid doesn't want to touch it. Up to them.
2b. Moxie's code, moxie's build, fdroid's repo - a hypothetical some people have floated, but fdroid won't host binaries they haven't built from source (and I don't blame them), and moxie wouldn't provide an official build outside of Gplay either way.
3. Moxie's code modified with new name by 3rd party, 3rd party's build, 3rd party's repo - eutopia.cz's build and repo. Another person to trust to build correctly and update frequently.
I don't think it's just some people being angry. I think things could be better, but it doesn't sound like they will be. Moxie wants F-Droid to provide services similar to what GPServices provides before he'll do 1. F-Droid will never do this (hopefully). So we have to live with 0 or 3. I think scenario 1 > 3 > 0; other people think otherwise.
Except they are. The F-Droid devs kept claiming they weren't. Moxie asked them to describe the system and surprise, surprise the keys are stored on a machine that is connected to a network that is connected to the internet. It turned out that the F-Droid devs didn't/don't understand the concept of stored offline vs online.
What? APKs are signed by the developer before they are uploaded to the store, and the signatures are verified by PackageManagerService, which is a part of AOSP.
Whether through implicit trust such as, for example, Google Play Licensing [1], or explicit trust such as the set of Certification Authorities Android devices ship with, you are trusting Google in many ways.
In the example you provide, you are trusting Google in the same way you are trusting every SSL cert authority. What you are responding to is a reference to the signing of the APK, which is not (and cannot be) done by Google even if the security of the transport layer is compromised.
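To make that concrete: anyone can pull the signing certificate out of an APK and fingerprint it, and the fingerprint will be identical no matter which store served the file, because only the developer holds the key. A minimal sketch, assuming Python with the third-party `cryptography` package, and only looking at the v1 (JAR) signature block - a real check would use Google's apksigner tool:

    import sys
    import zipfile

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization.pkcs7 import (
        load_der_pkcs7_certificates,
    )

    def signer_fingerprints(apk_path):
        # The v1 signature block lives in META-INF/*.RSA (or .DSA/.EC) and is
        # a PKCS#7 structure that embeds the developer's signing certificate.
        prints = []
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                if name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC")):
                    for cert in load_der_pkcs7_certificates(apk.read(name)):
                        prints.append(cert.fingerprint(hashes.SHA256()).hex())
        return prints

    if __name__ == "__main__":
        print(signer_fingerprints(sys.argv[1]))

Same developer key, same fingerprint; PackageManagerService enforces exactly this continuity when it installs an update over an existing app.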
Thanks. Yes, someone else posted a better link with the rationale. Essentially he wants the user experience for the average person to include auto-upgrade so that he can fix bugs, etc. reliably. For those who have more tech experience and can judge whether or not they should upgrade, they can build their own very easily.
Since he signs it with his own key, nobody can force an auto-upgrade except him. So you either have to trust him for this version and all future versions, or not trust him and build your own. You can package and sign your own version, but apparently (I haven't actually seen the reported Twitter comments) he doesn't want you to name it the same (which is clearly a reasonable trademark issue, since any build not signed by him could contain anything).
I personally don't have a problem with that and it certainly isn't a GPL violation.
> I'm trying to remember how Android works, but I seem to recall that you need to sign the packages differently on Play and Fdroid. So you literally can't redistribute the same Play package with Fdroid (someone correct me if I'm wrong). This means rebuilding... and hence rebranding.
You can distribute the same build on F-Droid and Play, and also signed with your own key, if you use proper reproducible builds.
(And not the TextSecure variant of "let’s download this huge image and let it compile the app", because that opens you to evil compiler issues).
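As a toy illustration of what a reproducible build buys you (a sketch of the idea only, not Signal's actual tooling): build the APK yourself, download the published one, and check that everything outside the META-INF/ signing directory is identical:

    import sys
    import zipfile

    # Compare two APKs entry by entry, ignoring META-INF/ (the signing data,
    # which legitimately differs between your key and the developer's key).
    # CRC32 is fine for a demo; a real tool would hash the decompressed bytes.
    def same_contents(apk_a, apk_b):
        def entries(path):
            with zipfile.ZipFile(path) as z:
                return {i.filename: i.CRC for i in z.infolist()
                        if not i.filename.startswith("META-INF/")}
        return entries(apk_a) == entries(apk_b)

    if __name__ == "__main__":
        print("match" if same_contents(sys.argv[1], sys.argv[2]) else "differ")

If the contents match, it doesn't matter who did the signing: you've verified the binary corresponds to the source you built.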
I don't buy his arguments. It's one thing to say we have to be on the Google Play Store, or we have to use phone numbers despite the privacy implications, because that is what people use. But ignoring much of the developing world (see WhatsApp), China, or the people who are your strongest user base by saying "you can just..." isn't pragmatic at all.
Nor is it actually reasonable that we should expect or rely on a few people to secure something that should be a fundamental right of communication. Not to rant too much, but it feels like going to parties (conferences) and talking about how much good you do, and then being dismissive in the real world, is how much of the security industry operates, and that Signal has just become the latest excuse for why nothing has to be fixed.
I'll give him credit for the WhatsApp integration though. More people in the field should consider working with companies where they can have a lot of impact.
Signal itself works, but since Google is blocked, no phones are sold with the Google Play Store, and even if you hack it onto your phone (which will break when it wants to update Play services), it will drain your battery trying to connect to blocked services. Unless you use a VPN (which will drain the battery by itself and also eventually be blocked) - but notifications probably still won't work because of the phone's original firmware. So yes, it works if you hack it onto your phone and then remove Play services and check the application for updates manually. Until it wants to update the app, that is, which is often.
Point. It doesn't really work, because it only supports the Google Play Store, even though most Chinese phones can load apps directly (because of the fragmented ecosystem). So at least it doesn't work in the "prevent mass surveillance" way.
I guess maybe it works from the Apple App Store? (which isn't blocked)
I reproduce here the dead message from "uola". I think the message deserves a proper response, not a flag.
"Signal itself works, but since Google is blocked no phones are sold with Google Play Store and even if you hack it onto your phone (which will break when it wants to update play services) it will drain your battery trying to connect to blocked services. Unless you use vpn (which will drain battery by itself and also eventually be blocked), but notifications probably still won't work because of the phones original firmware. So yes it works if you hack it onto your phone and then remove play services and checks the application manually. Until it wants to update the app that is, which is often.
Point. It doesn't really work, because it only supports the Google Play Store, even though most Chinese phones can load apps directly (because of the fragmented ecosystem). So at least it doesn't work in the "prevent mass surveillance" way.
I guess maybe it works from the Apple App Store? (which isn't blocked)"
By giving the NSA the only thing they want: metadata from Google.
>2) Stop targeted attacks against crypto nerds.
Who don't have Google services on their devices and don't use Google Chrome... yeah. Thanks for helping me so much.
The Senate is considering reauthorizing the law the NSA says authorizes it to collect hundreds of millions of online communications from providers like Facebook and Google as well as straight off the internet's backbone: https://theintercept.com/2016/05/10/senate-kicks-off-debate-...
This topic was about GCM specifically, which, since it goes through Google servers (unlike, say, my arbitrary browsing, or the network profile of my arbitrary apps), is directly available to Google.
Speculating that Google may have access to my full network profile is a little off-topic, but yeah, if they did have that data, they could certainly do similar analysis on it.
The answer is "GCM may reveal more to Google than one would expect from using an E2E-encrypted application (i.e., more metadata than one would initially assume)".
The person I initially replied to was talking about Google, GCM, E2E encryption, and that metadata won't reveal anything to Google except time/date of a single message and the message size. I pointed out there may be more information there.
I have no doubt that the NSA can do traffic analysis, or may have some of this data already... I'm not sure why that is in the replies to my comments in this thread.
That's only a meaningful answer if simple traffic behavior wasn't already revealing the same information. Was it, or wasn't it? I feel like I'm having a hard time getting a straight answer.
Does Google already have simple traffic behavior? If yes, then this information is nothing new to Google. If no, then this information may be new to Google.
Form a straight question and you'll get a straight answer.
Are you in the right thread? The discussion here is about what information Google can get from GCM messages, not what the NSA can get from GCM messages.
And even though your question is off-topic, I already answered it above.
Why would you accuse someone of being allergic to a technology when they are simply answering questions about it? If you disagree with the actual topic of discussion - that Google (not the NSA) might get more than just "message sizes and timestamps" out of an E2E-encrypted app which uses GCM messages - then have a normal conversation about it instead of bringing up the NSA repeatedly.
And if not, then stop making baseless and inflammatory accusations.
> Are you in the right thread? The discussion here is about what information Google can get from GCM messages
The parent poster of the post you initially replied to asserted that Signal was "giving NSA the only thing what they want: metadata from Google", so I guess that's where tptacek is coming from.
On a side note, Google can't actually know the message sizes because GCM is used without a payload.
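For the curious, a payload-less GCM push really is just an empty "wake up" tickle. A server-side sketch against the legacy 2016-era GCM HTTP endpoint (the API key and registration token are placeholders):

    import requests

    GCM_API_KEY = "..."   # server API key (placeholder)
    DEVICE_TOKEN = "..."  # the device's GCM registration token (placeholder)

    # No "data" field: Google only learns that *a* push was sent to this
    # device. The app wakes up and fetches the actual ciphertext from the
    # Signal server over its own TLS connection.
    resp = requests.post(
        "https://gcm-http.googleapis.com/gcm/send",
        headers={"Authorization": "key=" + GCM_API_KEY,
                 "Content-Type": "application/json"},
        json={"to": DEVICE_TOKEN},
    )
    resp.raise_for_status()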
More worried about the NSA correlating the two after getting the data from Google, but the one good thing about their centralization model is probably that, with millions of users talking to a central server (and texting being something you do often), timing analysis becomes extremely difficult.
But the "observer" can still know which mobile phone is yours and who communicates with whom? Especially if the "observer" has the info from the Signal servers.
Edit (as I can't post a reply to your answer):
And based on the NSA principle of the "three levels of distance", everybody is reachable as long as some common numbers are in our contact lists, which we happily upload.
The problem is that, at that point, Moxie couldn't confirm that the uploaded binary was the same one as packaged by their official release. Secure communication protocols are irrelevant if the client which you are communicating on is compromised.
What you have described is pretty much the opposite of how F-Droid works. One can't just take a binary (whether official or compromised) and upload it there. [1]
Instead, to publish an app there, you need to provide a source code repository [2], and their build farm builds it, sort of [3] providing a guarantee that the source code you can inspect is the same as what you've got running on your phone.
[1] There are exceptions, i.e. apps uploaded as binary-only (for example Firefox), but those come with a big red warning that the user sees before installing them.
Signal has reproducible builds for Android: https://whispersystems.org/blog/reproducible-android/ ...that just doesn't work with F-Droid. And building on their farm means that you have to trust them, and their build farm becomes a prime target if you want to infect lots of apps at once. In the play store, you sign your build, and Android will only let you install builds signed with that same key as updates. By moving the signing to F-Droid, you have to completely trust them.
I assume the Docker image provided by Signal does reproduce the Android build, but since the Docker image is a giant non-reproducible binary blob it is (as stated in the blog post) a "weekend hack" rather than a useful reproducible build system.
A user that is prepared to access the apk can verify the signature of the app they have on their device.
(So the compromise of F-Droid that results in a signed, compromised binary can't happen on Google Play; the APK is signed before it is sent to the store.)
What does Play Services have to do with anything? APKs downloaded from the Play Store are signed by a key the developer holds and validated by Android's PackageManagerService which is open source.
He's doing great and useful work, there's no doubt. But requiring a phone number for an internet instant messenger is still a deal breaker even with Chromium as an alternative.
The most useful piece of metadata available to anyone harvesting user profiles for surveillance or profit. Governments must love phone numbers. Getting an anonymous phone number for each separate service you register with is practically infeasible.
I worry about how influential people like Moxie Marlinspike are seemingly turning the modern 'mobile-first' development paradigm into a 'mobile-only' mindset. I don't believe in secure and private computing when you are making it very hard for people to use your tools on (or via) anything but the two dominant mobile operating systems.
"Yes, phone numbers are public enough that they are shared everywhere, but unique enough to lead to a single person not to speak of that persons movements. And "just use twilio" isn't a motivation for using phone numbers in the first place.
If he had said "the benefits of finding friends are greater than the privacy implications" or something like that, there would at least have been a case for a discussion, but now he's seemingly saying "oh, if you really care about privacy you could/should use a fake phone number"."
---
Personally, I don't know how "a fake phone number" setup can be implemented, especially in the countries where each phone number is assigned to one ID at the time of purchase, so to me "use a fake phone number" sounds like "let them eat cake."
> If he had said "the benefits of finding friends are greater than the privacy implications" or something like that, there would at least have been a case for a discussion
This has already been discussed at length many times. Perhaps uola hasn't seen this blog post yet:
That post still goes from the starting point of "social graph" and "5000 users in the contact list." It's completely the opposite of the most reasonable need: say I want to communicate using the encryption only with my girlfriend. I don't want any of my other contacts to ever be seen by any server, and I can agree with her on how we'll identify each other; we surely don't need real phone numbers transferred to any servers, and we don't even have to always use the same real numbers.
Yes, phone numbers are public enough that they are shared everywhere, but unique enough to lead to a single person, not to speak of that person's movements. And "just use twilio" isn't a motivation for using phone numbers in the first place.
If he had said "the benefits of finding friends are greater than the privacy implications" or something like that, there would at least have been a case for a discussion, but now he's seemingly saying "oh, if you really care about privacy you could/should use a fake phone number".
This also blocks me from using TextSecure/Signal, because they require Google Play Services at run-time.
I'm using a BlackBerry OS 10 device, which can run Android apps, and I even have Google Play running on it, but Google Play Services is stubbed for a large part on BB10, making some apps (such as Google Maps, Google Calendar, and Signal) impossible to use.
Why a security/privacy-oriented application such as Signal wants to bind so strongly to Google's services, I don't understand.
They want more users. Google actually makes special deals with carriers to optimize GCM, so it's best for battery life. It depends on your carrier, but the websockets fork LibreSignal can use up to 5x the battery. On my phone the fork bumps battery usage from 1-2% to 2-4% (Sprint), but it's totally worth it to avoid Pentagon/Alphabet (I mean Google).
Yeah me too, I'm using Conversations[0] on Android and it's pretty awesome actually. Pretty actively developed with a smooth UI and no Play services or phone number requirement.
Running a really light prosody[1] instance on my server to host my own XMPP connection, although since it's all E2E, I could have used a public one.
I've run this, and I also found it easy to set up and use. However, my understanding is that you only get end-to-end encryption with OTR, and that OTR can only be used when both parties are online at the same time. Am I mistaken about this?
I think offline encryption works fine if your XMPP server implements XEP-0198 [0]. Prosody doesn't out of the box, but there's a community plugin available [1] for it. The plugins are really easy to install if you're running prosody already; if you're not, ask your XMPP host. Stream management requires both client and server support, and Conversations supports it, so I'd assume your server is what's lacking.
If you're in an OTR conversation already, I think the server would just get encrypted garbage, hold it until the other party comes on the network, and then pass it off, and their client would decrypt it. I haven't read the protocol though, TBH.
Just some more information if anyone else is curious.
I have this setup and it works well enough when you have one device, but when I have multiple, I get garbage on one device and decrypted messages on the other. So if I keep my computer on but switch to my phone, I have to explicitly tell the sender to send messages to my phone instance instead of my computer instance; otherwise I get garbage. This is obviously because of E2E. It doesn't seem like there's an easy way to enable OTR multi-end E2E encryption/decryption, so far as I know.
OMEMO [0] does this though; Conversations supports it and Gajim has a plugin for it [1]. It's experimental and the plugin author warns not to use it for sensitive information, FYI. I haven't tried it out as I'm using Pidgin atm, but I plan to sometime soon.
I'm not familiar with this, but it looks like an interesting project. My main problem, however, is that I do not like that this GPL'd software isn't allowed to be redistributed. I might not be properly informed on this issue (and please correct me if I'm wrong), but from what I've read that seems to be the case.
Another thing to remember is that (IIRC) -for approximately forever- Red Hat Enterprise Linux has been a Linux distro that's composed almost entirely of Open Source software, but prohibits folks who receive the binaries from redistributing them.
It's the branding that allows Red Hat to effectively restrict distribution of binaries, due to trademarks. Since the source is still available, GPL is fulfilled.
My memory of the mechanism was a little different but the trademark component is obviously a part. Something in the EULA like "If you distribute RHEL binaries without our consent, we'll cut off your access to security updates and patches ASAP.".
Regardless, people seem to forget (or perhaps never bothered to learn in the first place?) that the GPL doesn't care to speak to binary distribution, just source code (and -sometimes- build instructions) distribution.
He can only demand that they don't make builds using the same name or logo. That's his right. Who cares? Stop whining like we're supposed to care that you have to change the name of a piece of free software before you can distribute it.
Untrue. He only wants distribution through channels that provide the same security assurances and deployment features that Google does through the Play Store. [0][1][2]
He's also quite open to replacing use of GCM with WebSockets or some equivalent tech, but if you don't use GCM, the replacement is likely going to significantly reduce battery life of phones on cell networks. [3][4]
> ...and even went as far to demand that free/libre Play-alternative F-droid removed their build of TextSecure.
That's because -in part- the F-droid project managers had (and -AFAIK, but I haven't checked in quite some time- continues to have) very serious issues in regards to their APK signing key handling procedures.
Signal is GPL'd. Anyone can take the code and do what they like with it, as long as it conforms with the license terms. However, it's very clear that Whisper Systems does not want people distributing Signal-branded builds on app distribution platforms that don't provide Whisper Systems the security guarantees and management tools that they need to get their jobs done.
In short, you're free to distribute custom builds of the Signal-Android, Signal-Desktop, TextSecure-Server, and Signal-iOS projects. However, it'd be nice of you to:
* Stand up an instance of the Signal server software on hardware you control, then point your builds of the Signal client software to your server.
* Rename the software that you're redistributing, make up your own logo, and make it abundantly clear that -while your work is based entirely on Signal's code- you're neither operating with the explicit support of Open Whisper Systems nor are you likely to be providing the same security guarantees that they are.
Pretty much this. The noise about f-droid tends to be a bit misguided, and fwiw building an apk from the Signal source on github is pretty painless.
I'm actually much more concerned with the size of the app; IMNHO it's way too big and way too hard to even begin to audit. I've been trying to figure out if there is some simple core that could be extracted for building a minimal CLI app with minimal media support -- but so far I'm not too hopeful.
It's just far too monolithic a project IMNHO. I want a small easily auditable library along with a small app - but it appears I'll have to implement that myself :-/
Still, at least the code is open, and it builds :-)
These are some good points but when you say "He only wants distribution through channels that provide the same security assurances and deployment features that Google does through the Play Store." it must be noted that this isn't a guarantee of security.
A quick search of 'Google Play malware' returns many results from 2016 and going back to when it was still called Android Market. This isn't hand-waving, there are many concrete and specific examples of security lapses in the Google Play store and this is a persistent problem. Plenty of bright people over there who care and are working on it I'm sure, but not solved yet.
Bottom line is it's his decision to make, but the only certainty that using Google's store brings is that you must have a first-party relationship with Google to use his app. It's better than downloading APKs from some warez site but not a guarantee of security. Framing it this way misses the bigger picture.
> ...it must be noted that this isn't a guarantee of security.
Yes, and if a nation-state is after you, you almost certainly don't have the OPSEC discipline required to keep your computing devices secure. Security isn't binary, it's a gradient. Ever more secure devices require ever higher costs, whether they be monetary costs, lost time, or procedural complications.
> A quick search of 'Google Play malware' returns many results from 2016...
And even a brief dig into the details of those "malware" reports reveals that -if the software was distributed and installed through the Play Store, and the Android device user did not have "Allow installation from unknown software sources" checked- pretty much all that that "malware" does is exactly what the permissions it requests permit it to do. [0]
Protip: If the software asks for permission to read your contacts, location information, and system log data, don't be surprised if it exfiltrates that information via the pretty-much-always-on Internet connection that's built into the device it's running on. :)
The fact of the matter is that Google is rather good at software security.
> ...but the only certainty that using Google's store brings is that you must have a first-party relationship with Google to use his app.
Not to be an ass, but you either haven't read or haven't understood either the technical aspects of what the Play Store gives you, or the target audience for Whisper Systems's software.
[0] Vulnerabilities like stagefright are excepted from this list because they are vanishingly rare. I challenge you to find another actual Android sandbox escape. :)
It is your assumption that the perceived threat is a 'nation-state', not mine. Personally I'm not too worried about them; I'm more concerned with advertisers and data brokers.
Let's take that brief dig into those 'malware' reports, shall we? Here's one from the Wall Street Journal[0] from last year. Some choice quotes:
"Security-software maker Avast called out a trio of malicious Android apps that were, until recently, available in the Google Play app store. The apps would go into sinister mode after 30 days on a device, and begin spamming users with advertisements, Avast said in a company blog post. Google told the Journal that, as of now, the infected apps have been pulled from Google Play."
"For those who had the apps installed on their phones for more than 30 days, a threatening ad would pop up each time they unlocked their phone, saying the device was out of memory, experiencing a security hole or some other false claim, Avast said. The pop-ups would then route people to websites where more malware could be installed on devices, said the security company. Anyone with either of the known apps installed should delete them immediately."
Do we blame the users since the apps informed them about permissions?
I've read and understood the same things you have, and reached a diametrically opposite conclusion. Maybe this is because I am also taking into account Android's severe updates problem, where updating is typically left to the carriers and handset makers to implement. Carriers and handset makers want to sell new phones, not patch old ones; who didn't see that one coming? Good on Google for patching Android security holes, too bad they don't reach the majority of users. I'm sticking to my original view and I guess we'll have to agree to disagree.
> It is your assumption that the perceived threat is a 'nation-state'...
... That was the opening sentence of my paragraph that demonstrated that there is no such thing as "guaranteed security". If you think that there is such a thing, then you're going to be confused about many things when you think about security matters.
To the rest of your comment:
You need to keep things in perspective. [0]
* On Windows, Mac, and Linux malware can read and write to anything that the user who installed it has access to. It can read what other programs have stuffed in to RAM... including your password manager's temporarily decrypted passwords. It can often record and exfiltrate the contents of one's screen and the output of one's microphone. It can often install keyloggers that capture banking, email, and other credentials. It can often encrypt personal data, lock the computer, send the computer user a friendly ransom note, and then decrypt that data once payment is received. Unless the malware is ransomware, it can do all this without ever notifying the computer user.
* On Android and iOS, malware can do exactly what the pre-installation permissions list says it can. Malware cannot read or write to data for which it does not have permission to read or modify. For instance, malware cannot be a keylogger unless it requests the replace system keyboard equivalent permission. For Android malware to read what other programs have stuffed into RAM, it -effectively- has to be authored and signed by Google and baked into the system image.
* Are you old enough to remember popup web advertising? Because (other than the platform) that's exactly what the section you quoted from that article is talking about. In the PC world, popups are called "annoying" rather than "malware".
Is the permissions system good? No. However, it's dramatically better than what you get in the PC world.
Remember that Signal is software that is intended for rather secure communications. Signal's threat model [1] requires that other programs running on the system be unable to tamper with the data that Signal puts into RAM and on to disk. You can get those properties with a PC, but so many non-technical users' PCs have been hit with real malware ages ago that that's kind of a lost cause. Actual "take over your computer" malware doesn't exist in either the Play Store or the App Store. This is really good for the average computer [2] user.
> Maybe this is because I am also taking into account Android's severe updates problem...
Wot? Other than the ~3 year update window problem, this hasn't been a widespread problem [3] since Google put critical system stuff in the Google Play Services package (rather than baking it into the system image) ages ago.
> I've read and understood the same things you have...
Read? Maybe. Understood? Clearly not. I hope that you'll take the presence of strong differing opinions backed up by sound reasoning and hard facts as a signal that some of your fundamental assumptions about the topic are incorrect.
[0] Indeed, maintaining perspective is a significant part of talking about security issues.
[1] Learn what that means if you don't already know.
[2] Mobile or otherwise.
[3] Yes, you can point to abandoned phones. I can point to people with missing limbs, but that neither means that the majority of people are missing limbs nor does it mean that there's a severe missing limb problem in the human population. :)
At issue further up the thread was whether insisting on using Google Play to distribute Signal for security reasons was sound logic, right? Forgive me if I missed the point, but I thought that's what we were debating. That's why I provided the specific example above which showed malware distributed via the Google Play store. This advanced my position that an app insisting on distribution via the Google Play store exclusively is not automatically more secure than alternative distribution methods.
What part of your response above advances your counter-argument please? The closest you came was saying that "Actual "take over your computer" malware doesn't exist in either the Play Store or the App Store." but this is directly contradicted by this story from just yesterday: http://www.slashgear.com/viking-horde-malware-uses-google-pl... (plenty of other sites covering this too)
I'm also unclear about missing limbs being somehow analogous to 'abandoned' phones. Let's set that aside and have a look at this chart on Wikipedia: https://en.wikipedia.org/wiki/Android_version_history Can you look at that and still say that updates are not a wide-spread problem? It links to sources and indicates to me that most Android phones are not using a current version with up-to-date security patches. Do you disagree?
I've stated from the beginning that I disagree that distribution via Google Play is any guarantee of security and provided links to specific examples of malware in the Google Play store as evidence that Google Play has repeatedly been used to distribute malware to many millions of devices. Can you rebut this?
As an aside, I'm reading your reply and I'm thinking to myself "If someone needed a comprehensive 'how-to' for a straw man argument then this one is pretty good". Also please consider that popups were a real problem as late as the early 2000's. That means you must be thinking I am in my early teens, if I am to take your "Are you old enough to remember popup web advertising?" comment at face value. I have to say it isn't helping to persuade me, and is having the opposite effect.
So your point is that the Play store isn't perfect at stopping malware and that negates all benefits over just installing random unsigned APKs?
Besides, Moxie's point is that the store installs what he signs and nothing else. Perhaps the system wouldn't catch malware, but if it prevents people from running builds he didn't make, it sure lessens the window of opportunity.
> The closest you came was saying that "Actual "take over your computer" malware doesn't exist in either the Play Store or the App Store." but this is directly contradicted by this story from just yesterday: http://www.slashgear.com/viking-horde-malware-uses-google-pl.... (plenty of other sites covering this too)
That link says these two things:
> There's a new piece of malware in the wild, and it's turning phones and tablets alike into a part of a large botnet.
This is use of both the ability to execute software within Android's sandbox and the ability to transfer data over HTTP/HTTPS on the Internet. That's what a botnet is.
> While unrooted devices are susceptible to the actions listed above, rooted devices are at a greater risk. On these devices, additional software is installed that allows it to execute any code remotely. What's more, it uses your root access privileges to make it difficult, if not impossible to manually remove the malware.
This doesn't affect anyone who's using Android as either distributed by Google, or by anyone who's distributing an Android-branded phone.
That is to say, unless you purposely go very far out of your way to install custom system software that deliberately weakens critical Android security features -thus putting your Android device pretty squarely in the realm of PC-level security-, then there is no software in the Play Store that will take over your Android device.
Pointing to that and claiming that it's evidence of a failure of the Play Store is like winding your seatbelt tightly around your neck (rather than securing the buckle to its clasp), driving at highway speeds straight into a bridge support, and then blaming the seatbelt when your head pops off of your neck. :)
> At issue further up the thread was whether insisting on using Google Play to distribute Signal for security reasons was sound logic, right?
No. The assertion was that Moxie only wished to distribute on the Google Play store. I addressed this complaint. From my first comment in this sub-thread:
>> He only wants distribution via Google...
> Untrue. He only wants distribution through channels that provide the same security assurances and deployment features that Google does through the Play Store. [0][1][2]
You then went off on a tear about how the Play Store doesn't provide "guaranteed security", with the strong _implication_ that this fact means that distribution through either the Play Store or the App Store is no better than distributing through a Market that performed no malware scanning, stripped the developer-provided signature from the software they distributed, signed all software distributed in the Market with the same signing key, and (because their code signing system was automated, rather than manually run) kept that signing key online and on an Internet-accessible computer, rather than in cold storage that gets occasionally attached to an airgapped computer.
The difference in procedures is crucial.
> I have to say it isn't helping to persuade me...
Your rhetorical style strongly indicates that you're more interested in verbal sparring than transfer of information. Maybe some months or years down the road you'll go back, revisit conversations like this one, and grow to understand something new about computer security.
Verbal sparring is nothing to be afraid of or to shy away from; we're adults and are staying within the guidelines here. It is known for the frequent use of metaphor. Examples of this can be found up-thread in, well, your rather colorful comments about severed limbs and heads popping off! Amusing, yes, but not convincing - and not amusing enough to revisit months later; better and healthier to let it go and move on, thanks.
I've provided numerous facts and backed them up with links to sources. That is a substantial transfer of information which you didn't acknowledge. What does all that great security you describe mean for all those people not getting updates? It is a real problem.
You go on to say "That is to say, unless you purposely go very far out of your way to install custom system software that deliberately weakens critical Android security features -thus putting your Android device pretty squarely in the realm of PC-level security-, then there is no software in the Play Store that will take over your Android device." The Viking Horde malware is bad enough with the ads popping up and dangerous links appearing; whether this is 'safely' sandboxed on a vanilla install or completely taking over a rooted device is of little significance to me. I don't want ANY of it.
I'd like a secure messaging app that can be installed on a more hardened version of Android like CopperheadOS, which does not require the constant 'phoning home' to Google that most Android phones do. Remote install capability via Google Play is a huge red flag and a deal breaker for me, but I understand Moxie intends to target more mainstream users and has to make compromises to serve them.
A fair number of Android users like me are more concerned about the mass surveillance practices of advertisers such as Google than we are about the full-on 'tinfoil hat' NSA stuff. I don't like either, but the corporations are more worrying because they're attracting the better workforce with their higher pay and as a result are more effective. We want Signal to protect us from Google, not the NSA.
What initially made me post my first reply to your initial comment was that I saw it was attracting down-votes, and I thought you put some effort into it and made some sound points, so I upvoted and replied. This thread has probably run its course at this point, but my email is in my profile if you have anything else to add.
> I'd like a secure messaging app that can be installed on a more hardened version of Android like CopperheadOS, which does not require the constant 'phoning home' to Google that most Android phones do.
I found this [0] today. You might be interested in the last paragraph of the comment. Enjoy!
> The Viking Horde malware is bad enough... whether this is 'safely' sandboxed on a vanilla install or completely taking over a rooted devices is of little significance to me. I don't want ANY of it.
It sounds like you'd rather be using something more appliance-like, such as an iDevice. Their sandboxes are substantially more strict, and their permission system is actually more fine-grained than what you find on Android. OTOH, you can do far fewer interesting things on an iDevice than on an Android device. That's the Security vs. Convenience tradeoff at work.
Anyway. This has no bearing on the fact that the infrastructure and services provided by Google through the Play Store are rather good and competently managed. It certainly has no bearing on the fact that distributing software through the Play Store is substantially safer and more secure than either distributing through a Market that has devastatingly poor code signing key management practices, or -even worse- demanding that your users download and install unsigned software hosted on arbitrary sites on the internet.
The truth of the matter is that distribution through the Play Store and the App Store is absolutely the safest and most secure way to distribute software to Android and iOS devices.
> I've provided numerous facts and backed them up with links to sources.
And by and large your "facts" come from antivirus vendors attempting to drum up sales of their now-pointless-on-the-fastest-growing-sector-of-the-computer-business virus scanning software by making mountains out of teaspoonfuls of dirt.
> What does all that great security you describe mean for all those people not getting updates?
You never actually investigated whether or not Google's split of core functionality into Google Play Services largely mitigated the security impact of laggard phone manufacturers. The answer might surprise you!
> A fair number of Android users like me are more concerned about the mass surveillance practices of advertisers such as Google...
Then, uh, why are you running an OS that's authored by Google? There's a saying: "If you don't trust the vendor of your OS, then you can't trust the computer that's running it.". By definition, the author of your OS has root privileges on any device that that OS runs on.
> We want Signal to protect us from Google, not the NSA.
Signal absolutely does not protect your conversations with others if a malicious party gains root on the device on which it runs. If you don't trust Google, then running Signal on Android is absolutely the worst thing you could possibly do. Seriously dwell on that for a while.
> Remote install capability via Google Play is huge red flag...
See above. Also, because Google does not have a copy of the signing key for Android apps that it doesn't author, it is impossible for Google to install rogue versions of apps that it didn't author. [0] When F-Droid was distributing their own copy of Signal, F-Droid used the same code signing key for all apps. This meant that they (or anyone who snatched the key) could push unauthorized updates to any software on the F-Droid repo.
> ...I understand Moxie intends to target more mainstream users and has to make compromises to serve them.
Heh. You haven't understood anything Moxie has said about why Signal is currently distributed exclusively through the Play Store, have you? :(
[0] Of course, you may not believe that if you don't trust Android's app signature verification code.
> It is your assumption that the perceived threat is a 'nation-state'...
> ... That was the opening sentence of my paragraph that demonstrated that there is no such thing as "guaranteed security". If you think that there is such a thing, then you're going to be confused about many things when you think about security matters.
If we can agree that this sort of thing won't protect against nation-states (if you even wanted that), what exactly does it protect against that a plain TLS connection doesn't?
You've missed the reason for that opening statement.
tombrossman said "...it must be noted that this isn't a *guarantee* of security." [0] (emphasis mine). I used the pretty-much-worst-case attacker in the first sentence of my opening paragraph to support the second sentence in my opening paragraph, namely:
> Security isn't binary, it's a gradient.
There is not a "guarantee of security". There are only "things that a given security strategy will protect against, and things that it won't protect against". If you want to expand the set of things that a security strategy protects against, you always need to pay the costs mentioned in the third sentence in that paragraph.
Now, to address your comment:
I'm not sure what the "this sort of thing" to which you refer in your comment is. Would you be so kind as to clarify?
It's also important to remember that anyone who can compromise your Google account, or put legal pressure on Google, can remotely install software on your device without interaction from you, and that there have been attacks in the past that have hijacked credentials in such a way that the attacker doesn't even need to do that.
Sure, sure. It's also important to remember that there are often chips in phones that are remotely accessible and have privileged access to the memory in the device. The consumer hardware security situation is... not the best.
> ...put legal pressure on Google can remotely install software on your device...
Given their actions in the past, I expect that Google would refuse to do this. That would be a bad precedent to set, given that Google operates in some countries with rather questionable reputations in regards to civil liberties.
I've read this blog post multiple times over the past few months and I wish I knew about it before I went to college. His career advice has so much clarity that I can't find elsewhere (so far).
Thanks for sharing the link. I'm about to do a career switch and reading this again is certainly reassuring. He is right. We are what we do for a living.
I find his work very inspirational and I hope I'll be able to personally thank him one day for all the work he has done.
If you don't mind me asking, what career are you switching from and to? I'm about to graduate but I'm not looking forward to any of my default career options. I'd be curious to hear about your experience and plans.
I'm about to graduate from medical school in a few months. Once I get my medical degree, I'm going for an undergraduate degree in Computer Science. I've always wanted to do computer science since high school, but I equally wanted to become a doctor. At one point I decided being a doctor was more important. I valued autonomy highly and I figured nothing is better than the freedom to operate my own small private practice in the future. I didn't want to end up being employed as a programmer, and I expected that being a freelancer is nowhere near as stable in terms of job prospects as being a doctor, especially in the long run. Add to that the fact that I was fascinated by how the human body works for the same reasons I was fascinated with computers. I think any hacker-minded person would be. I wanted to know the ins and outs of the human body. I wanted to know how it breaks, and how to fix it. Also, to be honest, I was very tempted by the extra income that comes with medical practice, but autonomy was ultimately the primary motive.
Six years later, here I am, graduating with a decent GPA but having 0% interest in pursuing clinical practice, although I have performed very well clinically and in terms of my medical knowledge. I simply realized that the practice of medicine (specifically diagnostics and treatment decisions) is nothing but a classification problem. We are literally trained to memorize 'algorithms' (flow charts) for diagnosing and managing hundreds of different illnesses. That's about it really. It boils down to asking a standard set of questions (the patient's history), examining the patient, and trying to guess the diagnosis. Often, lab tests are needed (you order them based on the flow chart you memorized for the presenting symptoms). Once they're available, the diagnosis is usually clear, or further testing and imaging is required, and the cycle continues. Even for complicated medical issues, this whole process can be represented by a simple flowchart that easily fits on an A4 sheet. The majority of modern-day work done by doctors can (and will) be automated in the near future. The biggest hurdle was never the technology; it's the laws and regulations. I've seen papers published in the 80s where AI bested human doctors in diagnosing many diseases, and recently I've seen more impressive results in radiology and pathology diagnostics, where the diagnosis purely depends on vision, which is a very complicated problem when you try to solve it with computers. The results are very promising and some papers have shown results where human pathologists and radiologists were outperformed by computer vision.
Humans are obviously still needed to deal with patients. It takes some clinical skill to know how to extract information from patients, how to examine them, and how to look for clues, but that does not strictly require a doctor to be done properly. In fact, most of our medical training, even in residency, is concerned with learning more and more 'algorithms', guidelines, and staying up to date with the latest medical evidence.
Computers won't replace doctors in surgical specialties, but they certainly will replace primary care physicians, as well as doctors in other fields like internal medicine (including its subspecialties like cardiology, pulmonology, etc.), emergency medicine, oncology, and others. These fields purely depend on memorizing and recalling flow charts of diagnosis and management. Humans are fallible when it comes to memory and recall; computers are much, much less fallible in these cases, and virtually infallible in some (100% diagnostic sensitivity and specificity were reported in some studies, which is impossible for humans to achieve).
By the time I realized what I mentioned above, I was already close to finishing med school. I decided to graduate first then see what I wanted to do next. I no longer saw any inherent joy or value in being a doctor. There is no room for trying to be creative, smart, or efficient. You just have to follow the official guidelines and policies, and hope you don't get sued when you, inevitably, make a mistake. I see no meaning in a career like that.
I love coding and I love learning CS in my free time so my decision for what to do next was easy. There are some interesting studies utilizing AI for diagnosing cancer metastasis on CT scans using computer vision, and that's an example of a topic I might be interested in. I don't necessarily want to do medically related research all the time though. All I want to do is to be a software engineer working on very interesting projects. Even the most mundane coding projects are easily 10x more mentally stimulating than clinical work in my opinion.
In retrospect, I have no regrets. I know a lot about how my body works and I learned a lot from dealing with hundreds of patients over the years. I also learned a lot about who I am and what I really want, and to me, this knowledge is invaluable. I also realized how much I appreciate computer science.
I guess that's enough rambling. Apologies for the late and long reply. I hope you found it useful, and I wish you the best of luck in your future career ;)
With people like Moxie, the future doesn't look that bleak anymore. The guy is really dedicated to what he is doing and, quite honestly, it is pleasing to see someone in the tech community who is not egocentric about creating his online persona. I'm not trying to insult anyone, just expressing gratitude that there are people who care about code, not striving to become rock stars.
If that's his autobiographical documentary, I'll pass. Never seen a more narcissistic piece of trash in my life. He won't stop talking in his monotonous voice the whole time--made it to the first scene where he films a conversation, but had to cut it off because even that he had to overdub--guy loooooves the sound of his own voice.
I've respected Moxie Marlinspike ever since he made sslstrip, a simple illustration of the fundamental insecurity of browser-based HTTPS.
However I do question his premise that criminals already have the wherewithal to opt in to "clunky" strong encryption before engaging in criminal activity.
In fact there are many scenarios where criminals simply go with the default security configuration in consumer devices, either because they (a) did not plan the crime in advance or (b) aren't as smart about opsec as you might expect.
There are many good arguments to make strong encryption the default for consumer devices, but here I feel he was attempting to take an easy way out by pretending it's orthogonal to investigating crimes. In fact it is a tradeoff, granting us security from cybercriminals and bad state actors (if there's even a difference), while making it harder for law enforcement in some scenarios.
Right, not all criminals can choose secure defaults. But those who can't are unlikely to be all that secure in other ways either, meaning that it won't change the threat landscape much.
Besides, it wouldn't kill our LEOs to work for their supper. This reliance on dragnet tactics means that'll soon be all they're able to do.
"security by obscurity" might proves as being good enough. I could write you a text message: "Hey abalone, do you fancy helping me planting a tree over at my garden?" and it seems perfectly innocent until you consider that planting a tree could be a code for all sort of things. As long as you don't "plant trees" every week and talk about all kind of things in your text messages you are not attracting attention.
One could argue that truly innocent/law abiding people require strong encryption the most because of all the criminals out there.
To be honest, I don't understand what substantial benefit end-to-end encryption actually brings in an environment of (almost-)mandatory updates.
- If someone from Facebook/Telegram/Signal/etc wants to know what you're writing, they can just instruct their app (via update) to send them your key. For closed-source services, you'd theoretically have to decompile and audit each update to make sure they are not doing that.
- If they want to know what you have written in the past, they can instruct the app to send them the conversation log.
- If Google (or Apple or Microsoft, respectively) want to know what you're writing, they can instruct the OS to send them the data.
(Google's "Android Backup Service" for example also backs up "third party settings and data" [1]. I don't know about the details of the backup service, but this shows to me it's quite possible that your key or conversation logs might even land accidentally on some providers' servers without them having any bad intent.)
- If (three letter agency of your choice) wants to get the data, they can just force any of the above companies via NSLs to get it for them.
- If any of the US strategic partners want to get the data, they can likely make a deal with an intelligence agency.
- Lastly, if the messenger company wants to mine or sell user data, they still have a lot of stuff that cannot be encrypted for operational reasons (such as your contact list and the phone numbers of all your contacts).
That leaves, to me, only one group for which "overlay encryption" brings an actual benefit: political activists in a country not at all affiliated with the US - or highly knowledgeable individuals who carefully control which updates they get. Both groups are important to consider, but they likely had ways to protect their communication before.
To actually protect communication not just from "the government" but also from the private industry, we would at least need some independent party to vet app updates.
I think the assumption is that any sufficiently motivated attacker will find a way to compromise their target. What e2e encryption accomplishes is the end of dragnet surveillance of entire societies, something that Snowden has exposed in great detail already.
But my point is that the way end-to-end encryption is currently implemented doesn't even accomplish that: either you trust your messenger provider not to be complicit in any surveillance or data-mining activities, in which case plain TLS (with key pinning) is enough to save you from being snooped on - or you don't trust them, in which case you're not actually protected, as they could push an update at any time to sidestep the encryption.
There are certain cases where current end-to-end encryption brings more security - e.g. if a provider's data center is compromised but the provider itself isn't. But those seem like edge cases to me that don't justify the attention the feature is currently getting.
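To make the key-pinning point concrete: pinning just means the client refuses to talk to any server whose certificate doesn't match a fingerprint baked into the app, so even a compromised or compelled CA doesn't help an attacker. A rough sketch in Python, where the hostname and fingerprint are made-up placeholders:

    import hashlib
    import socket
    import ssl

    HOST = "chat.example.org"  # hypothetical server
    PINNED = "0" * 64          # placeholder: SHA-256 of the server's DER cert

    def connect_pinned(host=HOST, port=443):
        ctx = ssl.create_default_context()
        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED:
            sock.close()
            raise ssl.SSLError("certificate does not match pinned fingerprint")
        return sock

But note that this protects the transport, not the endpoints: a malicious app update can simply ship a new pin, which is the parent's whole point.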
It's not perfect, but E2E crypto does make it significantly harder to get to the plaintext. A backdoor will always leave some kind of paper trail (if you know what to look for), whereas intercepting the plaintext on the server cannot be detected by users at all. Legally speaking, compelling a company to intercept messages that they have access to sounds different than compelling them to develop and sign a backdoored version of their product - that's essentially the Apple vs. FBI case.
I think this is a case of perfect being the enemy of good. If a state actor wants access to your messages, you're probably screwed anyway, unless your OpSec is top-notch. E2E crypto means that they'll have to really want your messages, at which point they're probably better off just stealing your phone or using a $5 wrench.
The key is in the penultimate question. If GCHQ/NSA is interested in surveilling a specific individual, you'd need a lot more than this software to communicate securely, because there are just so many vectors.
This solves dragnet surveillance, where the government could potentially just sniff all communication and use various analytical techniques to pick out potential criminals or people they dislike.
When I try, I get a certificate that expired 5 months ago.
www.popsci.com uses an invalid security certificate. The certificate expired on 12/10/2015 05:59 PM. The current time is 05/11/2016 01:39 AM. Error code: SEC_ERROR_EXPIRED_CERTIFICATE
"This server could not prove that it is www.popsci.com; its security certificate expired 153 days ago. This may be caused by a misconfiguration or an attacker intercepting your connection. Your computer's clock is currently set to Wednesday, May 11, 2016. Does that look right? If not, you should correct your system's clock and then refresh this page."
How are you connecting to the site? Genuinely curious, as it definitely does not support HTTPS, which the sibling comments confirm. Are you on a work computer with proxied MITM certificates, maybe some badly-configured security software (like Superfish), or something else? It would be good to know why this is happening.
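For what it's worth, one way to see which certificate you're actually being served (and spot an interception proxy) is to fetch it yourself with verification disabled. A quick sketch using Python and the third-party cryptography package (pip install cryptography); untested, but the approach is standard:

    import socket
    import ssl
    from cryptography import x509

    def inspect_cert(host, port=443):
        # Verification is off on purpose: we want to look at the
        # certificate even if it is expired or otherwise invalid.
        ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with ctx.wrap_socket(socket.create_connection((host, port)),
                             server_hostname=host) as sock:
            der = sock.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        print("subject:", cert.subject.rfc4514_string())
        print("issuer: ", cert.issuer.rfc4514_string())
        print("expires:", cert.not_valid_after)

    inspect_cert("www.popsci.com")

If the issuer turns out to be your employer or some security product rather than a public CA, that's your MITM.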
> that has garnered praise by everyone from Snowden to filmmaker Laura Poitras
The idiom "everyone from X to Y" is supposed to demonstrate breadth of support, where X and Y are very different sources, but Snowden and Poitras are most certainly extremely similar sources.
slightly OT: has anybody made a bot or alternative client for Signal (even basic functionality)? I'd love to see a code example and was surprised that I couldn't find anything.
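I haven't seen a polished library either, but the usual trick is to shell out to signal-cli (https://github.com/AsamK/signal-cli), which speaks the protocol from the command line. A minimal sketch, assuming you've already registered a number with signal-cli; the numbers are placeholders and the exact flags may vary between versions:

    import subprocess

    BOT_NUMBER = "+15551234567"  # placeholder: the number registered with signal-cli

    def send(recipient, message):
        subprocess.run(["signal-cli", "-u", BOT_NUMBER,
                        "send", "-m", message, recipient], check=True)

    def receive():
        # Fetches pending messages and returns signal-cli's plain-text output.
        result = subprocess.run(["signal-cli", "-u", BOT_NUMBER, "receive"],
                                check=True, stdout=subprocess.PIPE,
                                universal_newlines=True)
        return result.stdout

    send("+15557654321", "hello from a bot")  # hypothetical recipient

Wrap receive() in a loop and you have the skeleton of a basic bot.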
Well, no. You could have a situation where there's a policy in place forbidding installing GPL-licensed products. (I don't have any info about it, but I wouldn't be surprised if that was the case at Microsoft a while ago.)
EDIT: I'm getting downvotes and don't understand why. I'm seriously interested, can someone explain the reason?
> You could have a situation where there's a policy in place forbidding installing GPL-licensed products.
You might have situations with policies forbidding anything. Like MS Windows. Or Macs. Or smartphones. Rarely, yes, but I'm surprised the GPL looks that unique in this respect.
Weird that someone would prohibit installing GPL-licensed software. It is a very different case from using GPL-licensed code in development. Sounds like a dumb lawyer trying too hard to cover his rear against reverse-engineering accusations - which you cannot do this way anyway.
Pointing the finger somewhere different doesn't change the practical reality. The only question there is how likely a GPL ban is to actually affect you (seems small to me, to be honest, but I don't know).
If we are talking about probability, I have more often heard of developers working in places that forbid all non-approved software from being installed and used on work computers. One web developer could not even install an alternative web browser to test the code they wrote. Anything beyond a code snippet small enough not to be copyrightable was banned.
Of course, their market wasn't very competitive, so they could have such a policy without much risk. In more competitive areas like games, companies like Blizzard have shown they will use any license compatible with their business model, and in some cases have directly asked developers for exceptions. In that kind of market, an ideology-based policy doesn't work and would only cede ground to the competition.
> I'm getting downvotes and don't understand why. I'm seriously interested, can someone explain the reason?
Probably because your comment is of the form "Well, here's a ridiculously unlikely (as well as patently ridiculous) thing that I'm going to use as a counter to your argument."
Preemptive downvoter shield attempt: I'm deliberately taking a very uncharitable view of sandebert's comment in order to answer his question.
But I don't understand why my comment would be considered that ridiculous. I've worked at a company where it was not forbidden, but strongly frowned upon, to use open source of any kind ("because we need to be able to hold someone accountable if the software breaks"). And off the top of my head I recall several stories by Patio11 about trying to get a piece of software approved in his days as a salaryman - and being flatly refused by his manager because the software was way too cheap.
Agreed, none of these examples are specifically about the GPL, but they would cover very well the statement I responded to: "GPL code can be used anywhere".
And just to be super clear: I wasn't trying to make a "cute" comment about a super extreme situation. I was merely trying to present a little more nuance to the statement given.
This is not a bug; it is working as designed. The App Store wants to put more restrictions on what the user can do with the app, and that is incompatible with the GPL.
He only wants distribution via Google, and even went as far as to demand that the free/libre Play alternative F-Droid remove their build of TextSecure.
See: https://fdroid.eutopia.cz/