Two days ago I repeated my advice to be careful when using WebViews in security-sensitive applications. Yesterday I happened to come across a research paper (pdf) (and presentation), “Insecurity of Mobile Banking… and of other apps” by Eric Filiol and Paul Irolla of the ESIEA Operational Cryptology and Virology Lab, that recently became available at Black Hat Asia 2015. The research paper says…
The authors are French, so we can forgive that the above doesn’t read that well, but I think what they are saying is that, because banks already have a secure web site accessible via the browser, most are re-using it within their mobile apps.
I guess the assumption banks are making is that checks such as login and intrusion detection are already in place and proven, so it’s best to re-use these mechanisms as they are considered secure. Unfortunately, that assumption is very wrong. By using WebViews to render server-side screens, they have opened themselves up to the area of the phone’s software with the most (and most complex) vulnerabilities. For example, the login might be a web page on some server, but rendering a remote page brings the possibility of extra, unwanted content being loaded that might do literally anything, such as logging keys or reading data in files or memory. I have also yet to see a reliable implementation of certificate pinning that works with WebViews.
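For what it’s worth, the comparison step at the heart of certificate pinning is simple; the hard part with WebViews is getting at the verified certificate chain at all, since WebViewClient doesn’t expose it. A minimal sketch of just the comparison, where the pin value and how the certificate bytes are obtained are assumptions for illustration:

```java
import java.security.MessageDigest;
import java.util.Base64;

public class CertPin {
    // Sketch only: checks a server certificate (DER bytes) against a known pin,
    // where the pin is the Base64-encoded SHA-256 digest of the expected
    // certificate. Obtaining certDer from a live TLS connection (and doing so
    // inside a WebView) is the part that is hard to do reliably.
    static boolean matchesPin(byte[] certDer, String expectedPinBase64) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(certDer);
        // MessageDigest.isEqual does a time-constant comparison.
        return MessageDigest.isEqual(
                Base64.getDecoder().decode(expectedPinBase64), digest);
    }
}
```

Pinning the public key (SPKI) rather than the whole certificate is the more common variant, as it survives certificate renewal; the digest-and-compare step is the same.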
The authors also claim…
“Furthermore, there is at least one vulnerability affecting webview. This vulnerability has not been disclosed by Google and consequently Google will not publish any security patch to correct it for version 4.3 and prior versions. It is therefore a major vulnerability which allows a third party to take control over the phone.”
You can read the paper (pdf), slides (pdf) and a similar presentation (video) from December that analyses some well-known banking apps to show they aren’t that secure. If you really must use WebViews, you might also read my guidelines on how to be careful with WebViews on Android. If you are concerned about using a banking app yourself, you can read my basic advice for consumers.
In the past I have mentioned the need to be careful about using WebViews in apps, particularly apps that are security sensitive. The number and complexity of WebView vulnerabilities are such that a pragmatic approach might be to not use WebViews in security sensitive apps. Recent news has shown that the same vulnerabilities can be a security problem for the Chrome web browser app itself.
If you have been following the IT news, particularly the security news, you will know that the Italian spyware company Hacking Team recently got hacked themselves and their source code was posted on the Internet.
It turns out they developed an Android ‘remote2local’ exploit that cleverly combines three known Chrome vulnerabilities with the root-enabling put_user or TowelRoot vulnerabilities to allow pre-defined code to be executed as root when the user simply clicks a link in the browser. The details are on the Tencent blog (Google translated).
How bad is this? The Hacking Team have a compatibility file that says it covers Android 4.0 to 4.3 and lists some tested devices…
One of the vulnerabilities, CVE-2012-2825 (arbitrary memory read), was fixed in June 2012 and another, CVE-2012-2871 (heap buffer overflow), was fixed in August 2012, so end users allowing Chrome and the WebView to update via the Play Store will have had these vulnerabilities fixed a long time ago.
However, this demonstrates how vulnerabilities can be combined to run code as root without the user even knowing. The Hacking Team compatibility file and subsequent vulnerability fixes show that, in some ways, Android’s fragmentation aids security. It’s difficult for exploits to cover every combination of Android OS version and device, so they usually only work on a smaller subset of devices. As I previously mentioned, this won’t be much consolation to large companies with thousands of employees or customers, which greatly multiplies the chances of encountering exploits that might end up accessing sensitive company information.
Yesterday I wrote about how we shouldn’t necessarily ignore malware. GDATA has new research into current Android malware. They also have a free report (pdf). There are about 4900 new malware samples every day – that’s a new malware sample every 18 seconds.
About 50% of the malware is financially motivated, attempting to steal financial details, send premium-rate SMS messages or lock the device (ransomware).
If you are an Android user, you might want to read my advice for consumers. App developers should read my guidelines on securing data and code.
NowSecure sent a tweet today saying the chances of encountering malware are very low, that the chances of apps leaking sensitive or personal information are very high, and that the latter is the real problem.
I suppose that depends on what type of user you are, or whether you are looking at the risks from the point of view of a company supplying a service through apps. Some kinds of user are more promiscuous: they root their devices and obtain apps from 3rd party app stores, where there’s a much higher risk of malware. Also, large companies with thousands of employees or customers greatly multiply the chances of encountering malware that might end up accessing sensitive company or personal information.
There has been lots of press about the recent Samsung keyboard vulnerability. The vulnerability arises because a new language pack can be downloaded under a privileged context, and that download can be hijacked on the network to run arbitrary code.
Many articles mentioning this vulnerability are over-sensational, because it’s very unlikely that the user/app would download a new language pack AND be on a hijacked network taking advantage of the vulnerability. However, I think it’s more useful to pull apart the vulnerability and look for simple lessons to apply to other existing and new apps.
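The broader lesson is that privileged code must never trust what it downloads over the network. A minimal sketch of one mitigation: verifying a download against a known-good SHA-256 digest before using it. The digest would have to come from a trusted channel (shipped with the app, or fetched over a pinned HTTPS connection), and a full signature check would be stronger still; the class and method names here are my own.

```java
import java.security.MessageDigest;

public class DownloadCheck {
    // Hex-encode the SHA-256 digest of a byte array.
    static String sha256Hex(byte[] data) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Only accept the downloaded bytes if they match the digest obtained
    // from a trusted source; otherwise the file should be discarded.
    static boolean isGenuine(byte[] downloaded, String expectedSha256Hex) throws Exception {
        return sha256Hex(downloaded).equalsIgnoreCase(expectedSha256Hex);
    }
}
```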
There are three main problems…
It’s very easy to reverse engineer most Android apps using dex2jar, JEB or Dare and there are even online tools that can reverse engineer an app without having to install any tools.
Each tool has its own limitations, and those limitations are often exploited by obfuscation tools to make reverse engineering more difficult. However, to make things even easier, a new tool, enjarify, has been ‘unofficially’ released by Google (which presumably means there’s no support). It claims to resolve many of the limitations of dex2jar, such as support for unicode class names, constants used as multiple types, implicit casts, exception handlers jumping into normal control flow, classes that reference too many constants, very long methods, exception handlers after a catchall handler, and static initial values of the wrong type.
I can’t see why Google would want to release such a tool, other than that it’s the result of a Googler’s 20% ‘free’ time. It will probably encourage more copied apps, IP theft and thwarting of in-app purchasing.
However, it does serve to emphasise how much easier and more sophisticated decompilation has become over time. You shouldn’t rely on decompilation being difficult, nor assume that what protected your app in the past will protect it now or in the future.
A common question I am getting from clients at the moment is “How do I protect the (Java) source code?” in a shipping app. The short answer is you can’t. No matter what you do, a very determined hacker can recover something that resembles your code. However, you can make it much more difficult. I have written a lot about obfuscation and re-packaging on my Android Security site. You might also like to read about using the NDK and tamper prevention, as it’s also possible to recover the code from optimised dex/oat files and even from memory.
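As a starting point, ProGuard ships with the Android SDK and handles basic name obfuscation and shrinking. A minimal sketch of the kind of rules involved; the keep targets are illustrative and would need adapting to a real app:

```
# Run several optimization passes and make decompiled output harder to read.
-optimizationpasses 5
-overloadaggressively
# Move all obfuscated classes into the default package.
-repackageclasses ''
# Keep the entry points Android must find by name.
-keep public class * extends android.app.Activity
-keep public class * extends android.app.Service
```

Remember this only raises the bar: names and structure are hidden, but logic, strings and resources are still recoverable.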
The thing with this, and many other aspects of mobile development (e.g. UI design, testing), is that the chosen strategy should usually depend on the actual project. Some developers tend to be dogmatic and mandate ‘this’ and ‘that’ approach without listening, questioning or assessing. There are many ways to do things, and some might be better than others for a particular project, or might indeed not be worth doing at all.
Yesterday I posted about company app, platform and device preferences, where Good Technology identified that iOS remains the platform most used by enterprises (companies). One of the reasons for this is that iOS is perceived to be more secure than Android. But is this true?
About a year ago I posted how Marble Labs found that iOS and Android were equally vulnerable to attacks. More recently, Adam Ely of Bluebox had a post on the Bluebox blog asking ‘iOS vs Android: Which is More Secure?’. He explained that while iOS might be perceived to be more secure, it has had more vulnerabilities. He also talked about the Android and iOS sandboxes and correctly concluded that…
“With jailbroken devices, counterfeit devices and vulnerabilities, we have to assume in many cases, especially in BYOD environments, that the underlying operating system will be breached just as we assume with the operating systems on our laptops and servers”
The solution to the security problem doesn’t come from answering the question of which platform is more secure. As I mentioned last month, you will never have complete app security. We can’t trust any end point, so we have to instead concentrate on protecting the sensitive data appropriately and as best we can.
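By way of illustration, protecting sensitive data at rest means encrypting it with a key held somewhere safer than the data itself (on Android, the KeyStore is the usual candidate). A minimal sketch using AES-GCM from the standard javax.crypto API; the class and method names are my own, and real code would also need key management and error handling:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SealedData {
    // Encrypt with AES-GCM; a fresh random 12-byte IV is prepended to the
    // ciphertext so decrypt() can recover it. GCM also authenticates the
    // data, so tampering is detected at decryption time.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    // Split off the IV and decrypt; throws if the blob was modified.
    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, blob, 0, 12));
        return c.doFinal(Arrays.copyOfRange(blob, 12, blob.length));
    }
}
```

On a rooted or breached device even this can be defeated by reading memory, which is exactly the point above: reduce what can leak, don’t expect to prevent it entirely.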