Connection Quality Specific Downloading

Thinking about app download speeds: have you worked on a project where, after the app shipped, a few users with what I call ‘intermediate quality’ connections, neither good nor bad, complain that it downloads data slowly? The problem with intermediate quality connections is that they neither complete quickly nor fail fast by hitting timeouts. If you are downloading lots of things slowly, the consequence of ‘only just’ not hitting HTTP timeouts is that the user perceives downloads as slow.

The question then often becomes: can we download different quantities or types of data for different users? Users on high-end phones with good connections might be served richer data or more data at once, while those on poorer connections might be served minimal data, or indeed none. The problem is that it’s very hard to quantify all this and base your functionality on quantitative data.

Facebook has just made this problem a lot easier on Android because it has open sourced its Year Class and Connection Class libraries. These can classify the device’s capability and the current network performance, allowing you to vary functionality based on that performance.
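To make the idea concrete, here is a minimal sketch of connection-quality bucketing in the spirit of Facebook’s Connection Class library: sample the throughput of recent downloads and classify a moving average into coarse quality buckets. The class name, enum values and thresholds below are my own illustrative assumptions, not the library’s actual API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ConnectionQualityEstimator {
    public enum Quality { POOR, MODERATE, GOOD, EXCELLENT }

    private static final int MAX_SAMPLES = 10;
    private final Deque<Double> samplesKbps = new ArrayDeque<>();

    // Record the measured throughput of a completed download.
    public void addSample(long bytes, long millis) {
        double kbps = (bytes * 8.0 / 1000.0) / (millis / 1000.0);
        if (samplesKbps.size() == MAX_SAMPLES) samplesKbps.removeFirst();
        samplesKbps.addLast(kbps);
    }

    // Classify the moving average into a coarse quality bucket
    // (thresholds are hypothetical, for illustration only).
    public Quality currentQuality() {
        double avg = samplesKbps.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        if (avg < 150) return Quality.POOR;
        if (avg < 550) return Quality.MODERATE;
        if (avg < 2000) return Quality.GOOD;
        return Quality.EXCELLENT;
    }
}
```

An app could then, for example, request thumbnails rather than full-size images whenever the estimator reports POOR or MODERATE.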

Incidentally, Facebook has also just open sourced Fresco, a new image library for Android. Interestingly, it uses NDK (C native) memory techniques to store bitmaps, a technique I also happen to have used on a past project. I used it on a ‘kiosk’ single-use device where it wouldn’t affect the memory available to other apps (because there weren’t any others). I am not so sure it’s right to use it on a general-purpose device unless you carefully limit how much native memory you actually use. Jumping back to quality of connections, Fresco also provides for lazy loading of images from the network in a fast and smooth way.

Collaborative Battery Use Analysis

There’s an interesting free app for iOS and Android called Carat, created as a research project by UC Berkeley and the University of Helsinki, that performs collaborative analysis of battery use and gives personal recommendations for your particular device.

The statistics page shows some pretty, interactive charts of the results from the 760,000 people who have installed the app. The public naming of particular apps is a great incentive to keep your app out of the list of energy-intensive apps. I am not sure why it’s called Carat, though, given that its icon is a carrot.

Mobile Latency

Ilya Grigorik, Developer Advocate at Google, has an interesting new post on his blog comparing the latency of WiFi and 4G networks. He concludes that WiFi can deliver low latency for the first hop in the network if the network is idle, while 4G requires more coordination between the device and the radio tower but can provide more predictable performance.

He recommends apps don’t trickle data and instead aggregate network requests to both reduce energy use and reduce the overall latency.
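A minimal sketch of that aggregation idea: instead of sending each small request immediately (trickling data and repeatedly waking the radio), queue requests and send them as one combined batch. The class and method names below are my own illustration, not any particular library’s API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class RequestBatcher {
    private final int batchSize;
    private final Consumer<List<String>> sender; // sends one combined request
    private final List<String> pending = new ArrayList<>();

    public RequestBatcher(int batchSize, Consumer<List<String>> sender) {
        this.batchSize = batchSize;
        this.sender = sender;
    }

    // Queue a request; the radio is only used once per full batch.
    public void enqueue(String request) {
        pending.add(request);
        if (pending.size() >= batchSize) flush();
    }

    // Send everything queued so far as a single network round trip.
    public void flush() {
        if (!pending.isEmpty()) {
            sender.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

In a real app you would also flush on a timer or when the app goes to the background, so queued requests aren’t held indefinitely.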

I have previously written about chatty APIs, the need for more energy-efficient apps, and latency.

Android NDK Performance

Android apps aren’t implemented only in Java: you can also compile and run C/C++ (NDK) code. This obviously runs faster, but how much faster? And with newer Dalvik implementations over time, how much faster is C/C++ now?

Learn OpenGL ES has a useful post comparing the speed of a digital signal processing (DSP) filter implemented in Java and in C. It shows, in this instance, that the Java runs 17.78 times slower than the C. However, what’s more interesting is that it’s possible to manually modify the Java code, for example by inlining function calls, to get it to only 2.87 times slower than the C.

The problem with such Java optimisation is that it makes the code significantly less readable. However, I can see the case for a ProGuard-style tool that optimises specifically for speed.
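A toy illustration (not the Learn OpenGL ES code) of the kind of manual inlining described above: the readable version calls a helper method per filter tap, while the optimised version inlines the multiply-accumulate into the loop, trading clarity for the removal of per-call overhead.

```java
public class InlineDemo {
    static double tap(double[] coeffs, double[] input, int i, int k) {
        return coeffs[k] * input[i - k];
    }

    // Readable version: one method call per filter tap.
    public static double[] filterClear(double[] coeffs, double[] input) {
        double[] out = new double[input.length];
        for (int i = coeffs.length - 1; i < input.length; i++) {
            double acc = 0;
            for (int k = 0; k < coeffs.length; k++) {
                acc += tap(coeffs, input, i, k);
            }
            out[i] = acc;
        }
        return out;
    }

    // Hand-optimised version: the helper is inlined into the loop body.
    public static double[] filterInlined(double[] coeffs, double[] input) {
        double[] out = new double[input.length];
        int taps = coeffs.length;
        for (int i = taps - 1; i < input.length; i++) {
            double acc = 0;
            for (int k = 0; k < taps; k++) {
                acc += coeffs[k] * input[i - k];
            }
            out[i] = acc;
        }
        return out;
    }
}
```

Both versions compute identical results; only the call structure differs, which is exactly why a tool could in principle perform this transformation automatically.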

Note that the NDK isn’t just used for improved performance. I have also used it for…

  • Portability – To allow compiling of large open source and particularly proprietary libraries of code not available in Java
  • Increased Memory – To allow access to more memory than is available on the Java heap, for example to manipulate whole large images that can’t be loaded into Java
  • Deeper Device Access – Access to particular OEM APIs, for example hardware JPEG encoding/decoding, not available from Java

Using Unused CPU Cycles

Researchers at the Berkeley Open Infrastructure for Network Computing (BOINC) have developed an Android version of their software, allowing BOINC projects to tap unused processing power donated by device owners around the world to analyse data or run simulations that would normally require cost-prohibitive supercomputers.

The app only runs when the phone is charging and more than 95 percent charged. It also only communicates with computing projects when connected via WiFi, to avoid burning through users’ data plans.
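That gating policy can be sketched as a simple predicate. The method name and structure below are my own illustration of the policy described above, not BOINC’s actual code.

```java
public class ComputeGate {
    static final int MIN_BATTERY_PERCENT = 95;

    // Only compute while charging, nearly full, and on WiFi:
    // charging protects the battery, WiFi protects the data plan.
    public static boolean mayCompute(boolean charging, int batteryPercent,
                                     boolean onWifi) {
        return charging && batteryPercent > MIN_BATTERY_PERCENT && onWifi;
    }
}
```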

This made me think about other ways mobile devices and even desktops might cooperate to solve problems. Devices might, for example, relay Internet connectivity to one another, aggregate Internet queries to conserve power, or provide alert-style services from desktop to mobile. Your desktop might do things that your mobile can’t do, or doesn’t want to do, for performance or battery-conservation reasons.

Top App Frustrations

Apigee has announced the findings of its 2012 Mobile App Review Survey of over 500 mobile app users. 96% of app users say there are frustrations that would lead them to give an app a bad review, including…

  • Freezes – 76%
  • Crashes – 71%
  • Slow responsiveness – 59%
  • Heavy battery usage – 55%
  • Too many ads – 53%

It’s my belief that too many companies ship an app and think the mission is accomplished. If you are serious about user retention you need analytics on the user experience. These analytics not only help pinpoint problems but can also be used to fine-tune functionality to maximise retention (and possibly monetisation). Some companies ship apps with analytics but never look at the statistics. Successful companies factor in the effort to analyse and iterate.

Latency and Wait Time

Carpathia has a thought-provoking infographic on Mobile Content Usage and Expectations. It reveals how long people are willing to wait for data to load in a website or mobile app…


The infographic goes on to explain how the wait time is related to latency and the proximity of the server to the user. While this is partly true, it doesn’t match my experience of working on many apps that access a server.

First of all, you don’t get 5 seconds of latency just because the server is on the other side of the world. The latency of the Internet itself tends to be of the order of hundreds of milliseconds: a nearby server might give you low hundreds of milliseconds or less, while the other side of the world might give you mid-to-high hundreds of milliseconds. In practice, it doesn’t matter hugely where the server is located. What matters more is the latency of the mobile network and the speed of the server in processing multiple requests. Large companies often split servers across geographic locations for resilience, legislative data-protection issues and scaling, and rarely primarily for speed of access.

Also, not all slow apps are slow because of network latency; accessing storage can be slow too. However, where apps do access a server, aggregating server requests can reduce the wait time as well as save power and reduce the need to scale the server.

Server Request Failure

Last week I wrote about Mobile Cloud Computing and potential problems related to server throughput. Another problem mobile developers need to think about is what to do when a request to the server fails. You need to retry at some point, but when? How many times? And at what point do you give up and ask the user to ‘try again later’?

There’s a classic situation where, when a server request fails, the app (or sometimes the user) retries too soon and this, in itself, compounds the problem: the server ends up with too many requests to handle, which prevents it from recovering.

However, the above is an extreme and rare case. We have to remember that most of the time the failure occurs because the phone is outside data coverage and the request never reaches the server; in that case, retrying immediately probably won’t work anyway. Then there’s the added complication of whether the request relates to something the user has done (and is waiting for) or is being done in the background.

In almost all cases where the user is waiting, it’s usually best to pass the retry back to the user; that way they can reposition themselves to get better data coverage. For background requests, if the operation isn’t time critical (neither the app nor the server needs the data soon) then it’s sensible and easy to wait a relatively long period (hours) before retrying. If the data is time critical then you usually need some kind of backoff strategy to prevent (a) depleting the battery by continually retrying and (b), if the requests are reaching the server, overloading it. Most backoff strategies increase the time between successive failed retries and sometimes include a ‘give up’ strategy where the user is told a connection isn’t possible. Having said all this, there isn’t one case that fits all; it comes down to your app’s particular circumstances.
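A minimal sketch of such a backoff strategy: each failed retry doubles the wait (capped at a maximum), and after a fixed number of attempts we give up and surface the failure to the user. The names and constants are illustrative assumptions, not a specific library’s API.

```java
public class RetryPolicy {
    private final long baseDelayMs;
    private final long maxDelayMs;
    private final int maxAttempts;

    public RetryPolicy(long baseDelayMs, long maxDelayMs, int maxAttempts) {
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
        this.maxAttempts = maxAttempts;
    }

    // Delay before the given (1-based) retry attempt, doubling each
    // time and capped so we never wait absurdly long.
    public long delayBeforeAttempt(int attempt) {
        long delay = baseDelayMs << (attempt - 1); // exponential growth
        return Math.min(delay, maxDelayMs);
    }

    // The 'give up' strategy: past this point, tell the user a
    // connection isn't possible rather than retrying forever.
    public boolean shouldGiveUp(int attempt) {
        return attempt > maxAttempts;
    }
}
```

In production you would typically also add random jitter to the delay, so that many devices failing at once don’t all retry at the same moment.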

This kind of thing is usually left to the developer. However, in many cases it has side effects on the user experience, the server side and the answers given by end-user support. It’s best decided upfront as part of the requirements/design, but even when it isn’t, it should be documented openly somewhere so everyone knows what should be happening.