In this article I want to discuss several architectural problems that I’ve seen in Android applications. I believe that avoiding these issues in your codebase can spare you considerable time in the long term.
Note, however, that I derive this post from my own professional experience. Most probably, it’s biased in many ways. Therefore, I expect the list of issues you’ll find below to be neither exhaustive nor representative. So, you should probably treat it as just a list of concerns to review your application for.
The order of the listed items is pretty much random and doesn’t convey any information.
Definition of Architecture
There is one major problem with writing about Android architecture: this term has no canonical definition. Therefore, it’s open to vastly different interpretations.
For example, in recent years, Google’s interpretation of Android architecture seems to be something along the lines of “a library”. I don’t see any other explanation for the fact that LiveData, Room and ViewBinding are “architecture components” in Google’s eyes. I, personally, don’t agree with such an interpretation of this term.
In most cases, I use Grady Booch’s definition of software architecture:
“Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change.” (Grady Booch)
While this definition might look a bit too high-level at first sight, it’s actually very practical. It basically says: if you can’t show how a specific decision has a big impact on the cost of future changes, then that decision has no architectural significance.
Martin Fowler proposed a very similar definition (in fact, I think that these two definitions are isomorphic):
“Architecture is the set of decisions that are hard to change.” (Martin Fowler)
Given the above definitions of “architecture”, in this article I’m not going to hand-wave my way through block diagrams and obscure words. Instead, I’ll explicitly show how each of the problems described below increases the cost of future changes. Then it’ll be up to you to decide how relevant these concerns are to your own project(s).
God Objects
The first architectural problem on my list won’t come as a surprise to most Android developers: God Objects. These are classes that have too many responsibilities, know too much about the system and, as a result, contain lots of source code. If you think about it for a moment, the Android framework itself contains many God Objects (Activity, Fragment, View, Service, etc.). Therefore, it shouldn’t be too surprising that you can find a similar anti-pattern in many Android apps.
The first major problem with God Objects is that each time you need to do anything with them, you’ll be slowed down by the sheer amount of code in these classes. It’s like a constant, indefinite tax on your effort.
For example, when I ramp up on a new codebase, I’ll usually get help from the devs who maintain it. Sometimes I’ll ask a question and want to see the relevant part of the code. The maintainers will often go to a specific class, and then scroll through it for more than 30 seconds until they find the one piece of logic we’re interested in. In most cases, they won’t even realize how inefficient this mode of work is. Unfortunately, I’m always painfully aware of the inefficiency because, during that back-and-forth scrolling, I lose track of the bigger picture. Needless to say, if I had to find that piece of logic without the maintainers’ assistance, it would take much more time, so the waste would be even bigger.
Another issue with God Objects is that different responsibilities inside them often become inter-coupled in unexpected and tricky ways. Then, when you change their code, the resulting change in behavior can surprise you big time.
For example, imagine that you want to change an existing algorithm inside a God Object, or extract it into a standalone class. However, you then discover that a private field where this algorithm stores intermediate results is also read by two other algorithms in that same class. Now, instead of dealing with just one algorithm, you need to deal with three inter-dependent algorithms. I’ve been in this situation; it wasn’t fun. And that’s not even the worst-case scenario. The worst-case scenario (which is also a common one) is that you don’t trace the usage of the private fields (because it’s clear how that ONE algorithm uses them), and then the other two algorithms break in production. If these three algorithms resided in different classes, they would either be independent or, at least, you would clearly see their inter-dependencies through their public APIs.
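To make this concrete, here is a minimal, hypothetical sketch (the class names and the “anomaly” rule are invented for illustration, not taken from any real codebase): the intermediate result that used to live in a shared private field of a God Object is now computed by its own class, so the dependency on it is explicit in the public API.

```java
// Hypothetical sketch: the average that used to be a private field silently
// shared by several algorithms inside one God Object is now computed by a
// dedicated class, and the dependency on it is explicit.
class AverageCalculator {
    double average(double[] samples) {
        if (samples.length == 0) return 0;
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }
}

class AnomalyDetector {
    private final AverageCalculator averageCalculator;

    AnomalyDetector(AverageCalculator averageCalculator) {
        this.averageCalculator = averageCalculator;
    }

    // A value is "anomalous" here if it deviates from the average by more
    // than twice the average (an arbitrary rule, just for illustration).
    boolean isAnomaly(double[] samples, double value) {
        double avg = averageCalculator.average(samples);
        return Math.abs(value - avg) > 2 * avg;
    }
}
```

If a second algorithm now needs the same average, it receives its own `AverageCalculator` through its constructor, and the shared logic is visible in both class signatures instead of being buried in a private field.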
Abuse of Inheritance
Inheritance is a powerful tool, but, unfortunately, it’s also easy to abuse.
One common source of inheritance abuse is that developers often want to invoke specific functionality in multiple components, and inheritance seems to be the simplest way to reuse code. This problem is especially acute in Activities and Fragments, because most apps will have base classes for these components anyway. It then becomes very appealing to just put all the common functionality in them.
It’s important to understand that when you extend a class, you basically inherit all the source code of that class and of all its ancestors. Therefore, you should consider the entire inheritance hierarchy to determine whether a specific class degrades into a God Object. It’s not enough to just look at the upper-most class.
For example, that’s one of the main reasons why Activities and Fragments are so complex to understand, even before you write a single line of your own code in them. Since you’re forced to extend the framework’s components, your own Activities and Fragments start out as classes with thousands of lines of complex, inter-coupled code in them. An excessively complex lifecycle is one of the features you get through this forced inheritance.
So, the first problem with inheritance is that it can produce “distributed God Objects” under the guise of avoiding code duplication. That’s even worse than regular God Objects because you’ll often need to traverse the hierarchy up and down to make sense of all that code.
The second problem is that the code you share through inheritance is reusable only within a single inheritance hierarchy, and this restriction can come back to bite you.
For example, developers often define dialogs right inside Activities. Later, when they discover that they need to show an existing dialog on another screen, they push that dialog’s code into their base Activity to be able to reuse it. Initially, this works and feels all right. However, in some cases, developers might discover that the same dialog needs to be shown from a Fragment, or some other component. Since the dialog’s definition resides in the base Activity, it’s inaccessible to other components without either ugly hacks or a refactoring, which can be very risky at that point. In this case, defining the dialog inside an Activity was a suboptimal design to begin with. The subsequent (improper) use of inheritance amounted to planting a time bomb in the source code.
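A composition-based alternative can be sketched as follows. This is a hypothetical illustration in plain Java: `DialogHost` is an invented stand-in for whatever can actually show a dialog on Android (an Activity, a Fragment, etc.), so the real signatures would differ.

```java
// Invented stand-in for any component capable of showing a dialog.
interface DialogHost {
    void showDialog(String tag, String message);
}

// The dialog logic lives in its own class, so any component that can
// provide a DialogHost gets to reuse it -- no base Activity required.
class InfoDialogController {
    private final DialogHost host;

    InfoDialogController(DialogHost host) {
        this.host = host;
    }

    void showNetworkError() {
        host.showDialog("network_error", "Please check your internet connection");
    }
}
```

Because the controller is held by composition rather than inherited, an Activity, a Fragment, or any future component can use it without touching an inheritance hierarchy.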
All that said, it’s when you want to modularize your app that inheritance abuse rears its ugly head to the fullest. Modularization means extracting classes into standalone containers (modules) which don’t have circular inter-dependencies. It’s a challenging task even if your code is already structured into loosely coupled classes that adhere to the Single Responsibility Principle. However, it becomes a real nightmare if, along the way, you also need to decouple logic inside inheritance hierarchies to remove circular dependencies. The cost of bad inheritance in this situation can range from days to weeks (literally) if you’re dealing with a sufficiently large and complex codebase.
No Clear Multithreading Boundaries
The absolute majority of code in Android apps executes on the so-called UI thread. As you probably know, you shouldn’t overload this thread with work because that can lead to a very bad user experience. Therefore, most real-world Android projects offload some tasks to background threads.
Multithreading is a complex topic, and writing correct multithreaded code takes much knowledge, experience and attention to detail. Multithreading bugs are nasty and can take a long time to find and fix. However, I wouldn’t call most multithreading bugs architectural issues because they’re usually localized, so once you understand the problem, you can fix it in a reasonable time.
That said, at least one aspect of multithreading can evolve into a real, long-term struggle: lack of clear multithreading boundaries. When there is no clear encapsulation of multithreaded code in the codebase, you can never be sure whether a piece of code you deal with should be thread-safe or not. It puts all developers working in such a codebase into a constant “between the hammer and the anvil” situation. On the one hand, if you can’t be sure whether a piece of code is multithreaded or not, it makes sense to make it thread-safe just in case. On the other hand, making code thread-safe is a delicate and time-consuming process, so ensuring thread-safety for a big part of your app is an unreasonable and, I’d even say, unrealistic endeavor (especially if not all devs on the team are fluent in multithreading). Therefore, if you don’t establish a clear boundary for multithreaded code in your project, you risk being exposed to an endless stream of multithreading bugs.
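One possible shape for such a boundary is sketched below. This is a hedged, hypothetical illustration in plain Java (on Android, the callback `Executor` would typically wrap a `Handler` attached to the main `Looper`): all background work funnels through a single class, and results are always delivered back on one designated thread, so the code on either side of the boundary never needs to be thread-safe itself.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical multithreading boundary: everything that runs on background
// threads is confined to this class; results and errors are handed back on
// a single designated "UI" executor.
class BackgroundRunner {
    private final ExecutorService background = Executors.newFixedThreadPool(4);
    private final Executor callbackThread; // stands in for the Android main thread

    BackgroundRunner(Executor callbackThread) {
        this.callbackThread = callbackThread;
    }

    <T> void execute(Callable<T> task, Consumer<T> onResult, Consumer<Exception> onError) {
        background.submit(() -> {
            try {
                T result = task.call();
                callbackThread.execute(() -> onResult.accept(result));
            } catch (Exception e) {
                callbackThread.execute(() -> onError.accept(e));
            }
        });
    }

    void shutdown() {
        background.shutdown();
    }
}
```

With a boundary like this, the answer to “does this code need to be thread-safe?” becomes trivial: only the code inside the boundary does.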
For example, imagine that you change something specific in your code, and the application crashes at runtime. Or worse, you release to production and only then notice an increase in crash rate. Or, worse still, you don’t notice anything suspicious in your metrics, but the app’s rating on Google Play plummets (alternatively, clients start calling). Now imagine that this happens on a semi-regular basis. That would be a big and very costly problem, don’t you think?
Ignoring Application Build Time
Build time is a common problem in bigger Android projects, but it’s rarely considered an integral part of an app’s architecture. However, given the definition of “architecture” we adopted for this article (cost of change), the build time issue definitely has its place here.
At a high level, the problem is trivially evident and even sounds reasonable: as Android codebases get bigger, they take more time to build. Unfortunately, it’s very difficult to analyze this phenomenon because the exact numbers vary greatly, even between different codebases of approximately the same size. For example, see this survey where several developers shared their build stats.
For some back-of-the-envelope estimation, let’s assume that five developers work on a project where an incremental build takes 20 seconds. I’d consider that decent performance for a project that requires the effort of five developers to maintain. Let’s further assume that each developer builds the project 20 times a day, on average. This amounts to about seven minutes a day spent waiting for the build. Not much, right? But what if these developers also unit test their code, which means they probably build it much more often? Let’s say 100 times a day, on average (a totally reasonable number for TDD). Then each developer waits for more than half an hour per day. In this case, they waste a considerable chunk of their work day just waiting for builds.
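The arithmetic above is trivial, but easy to check; the inputs below are this article’s assumptions, not measurements from any real project:

```java
// Back-of-the-envelope estimate of time spent waiting for builds.
// All inputs are assumptions from the text, not measured data.
class BuildTimeEstimate {
    // buildSeconds: duration of one incremental build
    // buildsPerDay: how many times one developer builds per day
    static double dailyWaitMinutes(int buildSeconds, int buildsPerDay) {
        return buildSeconds * buildsPerDay / 60.0;
    }
}
```

With the numbers above, 20 builds a day comes out to roughly 7 minutes of waiting, and 100 builds a day to roughly 33 minutes, per developer.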
I want to once again emphasize that there is no such thing as “representative build time”. If you clicked the link I posted above, you probably saw that there are bigger projects which do much better than the 20 seconds I assumed in the previous paragraph. On the other hand, there are also relatively small projects that do much worse. Therefore, there is definitely a lot of room for maneuver in this context.
In my opinion, the simplest way to improve your build times is to invest in better hardware. However, this route has its limits, and it’s not always a viable option either. There is no shortage of individuals and startups who would rather not make, or simply can’t afford, this additional investment. If upgrading hardware further is impossible (due to reaching either a technical or a budget limit), then optimization of build times becomes tricky.
One of the most underrated and overlooked ways to keep your build times at bay is to be very picky and careful with the tools you use in Android projects. In my opinion, developers are often too quick to bring new libraries, frameworks and even programming languages into their codebases. Some of these have a major negative effect on build times.
A word about early modularization in this context: many developers believe that early modularization is the ultimate and simplest way to keep build times short. Some even treat it as a “free optimization”. This technique indeed has its merits, but it’s very far from free. It takes effort to modularize and to maintain a modularized app, even if you don’t need to refactor legacy code. In addition, premature modularization can have problems of its own.
All in all, there is no simple answer to build times. If you don’t pay attention to this metric, you can waste much time waiting for builds, and you’ll probably need to address the issue anyway as the project grows. If you adopt some “silver bullet” (like heavy modularization from the outset), you can waste much time doing that for questionable gain (there are even examples where modularization made matters worse).
Therefore, my recommendation in this context is to be mindful of this aspect and treat it as an integral part of your app’s architecture. You can even decide (with other team members) on some upper limit for this metric and then address the problem every time you reach that limit.
Premature Modularization
I touched upon the topic of unfortunate modularization in the previous section, but it deserves a discussion of its own.
Pretty much every time I ask developers “what’s wrong with your code?”, I get very accurate and detailed answers. After all, it’s only reasonable that developers who work in a codebase every day would be aware of the parts that make them struggle. But if that’s the case, then why do so many projects deteriorate to the point of almost-stalled progress, or even a rewrite? Why don’t developers refactor their code incrementally as new information and new requirements expose its limitations?
The most common answer to the above question is “we don’t have time to refactor”. That’s true in many cases, but it’s only part of the story. Another part is that many developers don’t know how to refactor their code to a better state even after the problems present themselves. Yet another part is that some developers fear refactoring. I’m not here to judge anyone, and I actually think developers have every reason to be wary of refactoring: it’s hard, it’s time-consuming, it often goes badly and it can result in many bugs. However, in my opinion, it’s also the only viable way to keep a project going in the long term without massive cost overruns.
Now, if we add modularization into the picture, all the above arguments become much stronger. Let’s put it this way: even if developers had all the evidence in the world that their app is modularized along incorrect boundaries (e.g. domain modules don’t reflect up-to-date understanding of the business domain), they’d have good reasons to avoid refactoring the modules. I would argue that incorrect modularization is the extreme case of the wrong abstraction which Sandi Metz discussed in her outstanding article. Incorrectly identified modules thus become a liability and either slow the project down, or require a considerable refactoring effort to fix. It’s yet another unfortunate “between the hammer and the anvil” situation that you wouldn’t want to find yourself in.
Therefore, postpone modularization until you have as much information as possible and then approach this task with the assumption that the modules you define will stay in the codebase forever.
Assuming a Single Screen Size
This architectural problem is probably irrelevant to most applications out there, because no one in their right mind would release an Android app that assumes a single screen size to Google Play. However, I did encounter this issue in several apps that targeted specific devices (kiosk tablets).
See, when you develop an app knowing that it’ll run on just one device, it sounds reasonable to forget about other form factors and screen densities. The problem, however, is that it’s very probable that, at some point, the app will actually need to be adapted to other devices.
Anecdotally, of the three clients I worked with who developed such apps, two wanted to migrate to other tablets and one wanted to port their app to phones. Three out of three might be a coincidence, but I don’t think so. I believe it’s just a reflection of the fact that business requirements change over time and technology evolves. Therefore, in my opinion, it’s simply unreasonable to assume that an app will target just a single device forever.
Now, some developers and managers might say that they are willing to sacrifice some potential effort in the future to move a bit faster in the present. That’s fair. However, it seems to me that many overestimate the effort required to keep the option of supporting more screens open, while greatly underestimating the downstream problems. It can take weeks of development, and then more weeks of back-and-forth with QA, to adapt an app that is hardcoded to a single screen. So, if you think along these lines, make sure the potential gains today justify the long-term risk.
Not Using MVx, Dependency Injection and Package-by-Feature From Day Zero
In my opinion, Model-View-X, Dependency Injection and Package-by-Feature are the three most important architectural patterns to adopt when you start a new Android project. In fact, they are so important that each one of these patterns deserves a section on its own. However, since I already wrote an article about them, I decided to just provide a link to that previous article here. Read it. It’s very important to get this stuff right when you start a greenfield project.
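Since Dependency Injection is one of the three patterns above, here is a minimal, framework-free sketch of what constructor injection means in practice (all names are illustrative, not from any real library):

```java
// The abstraction that client classes depend on.
interface AnalyticsTracker {
    void track(String event);
}

// One concrete implementation; tests can substitute any other.
class ConsoleTracker implements AnalyticsTracker {
    @Override
    public void track(String event) {
        System.out.println("analytics event: " + event);
    }
}

class CheckoutScreen {
    private final AnalyticsTracker tracker;

    // The dependency is injected through the constructor, so this class
    // neither knows nor cares which concrete tracker it talks to.
    CheckoutScreen(AnalyticsTracker tracker) {
        this.tracker = tracker;
    }

    void onPurchase() {
        tracker.track("purchase");
    }
}
```

In tests you’d pass a fake tracker that records events instead of printing them; in production, a composition root (or a DI library such as Dagger or Hilt) performs the wiring.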
Singletons
Singletons are one of those topics which, when discussed, are pretty much guaranteed to result in a full-fledged flame war. I even contemplated omitting the topic from this article, but decided that it wouldn’t be honest on my part.
The benefit and the appeal of Singletons is that they make it so easy to share state. But that’s also their main problem. Since Singletons basically represent global static state, any code in the app can access them at any time. This often results in flows that span multiple unrelated components from different parts of the application. These flows are extremely hard to trace, and they introduce global timing constraints between seemingly unrelated pieces of code (so-called “temporal coupling”). In addition, the aforementioned flows can easily become circular, visiting one or more Singletons repeatedly.
The cost of Singletons is not something you see right away. Initially, they might even seem like a simple and efficient design decision. However, in the longer term, you discover that you’re forced to look into many different classes to build a mental model of the app’s flows. Not only that, but in many cases you’ll also need to build a temporal map of what’s going on. And even then, after a thorough code review, once you change the code in any way, you can still get unexpected side effects in a distant and seemingly unrelated part of the app.
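The difference in traceability can be shown with a hypothetical side-by-side sketch (all names invented for illustration): the same piece of shared state accessed as a global Singleton versus received as an injected dependency. With injection, every class that can touch the state declares it in its constructor, so the flow can be traced statically.

```java
// The shared mutable state itself.
class SessionStore {
    private String userId;

    void setUserId(String userId) { this.userId = userId; }
    String getUserId() { return userId; }
}

// Singleton style: any code, anywhere in the app, can read or mutate this
// state at any time, so the flows through it are very hard to trace.
class GlobalSession {
    static final SessionStore INSTANCE = new SessionStore();
}

// Injected style: only classes that declare SessionStore in their
// constructor can touch it, so every reader and writer is discoverable.
class ProfileLoader {
    private final SessionStore session;

    ProfileLoader(SessionStore session) {
        this.session = session;
    }

    String profileKey() {
        return "profile_" + session.getUserId();
    }
}
```

Note that `SessionStore` itself is unchanged between the two styles; what differs is how the rest of the code reaches it, and that difference is exactly what determines how hard the flows are to trace.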
Just like with inheritance, you’ll get to feel the full extent of Singleton-induced problems if you try to modularize a codebase that uses them. What you’ll find is that multiple inter-dependent Singletons behave like a single distributed God Object. You can’t extract them into standalone modules one by one without much additional, tricky refactoring.
Now, I can totally imagine a codebase where a very limited number of Singletons are used very carefully to implement some specific functionality. However, in practice, I’m yet to see a project like that. So far, all the projects I’ve seen that used Singletons ended up with excessive, obscure, hard-to-trace, circular coupling.
Conclusion
So, these were some common architectural mistakes that I’ve encountered in the wild. I remind you that this post is neither exhaustive nor necessarily representative. Treat it as a helpful list of pitfalls to be aware of when you write your own Android applications.
By the way, if you’d like to read about lower-level mistakes in Android code, I highly recommend this post by Gabor Varadi.
As usual, thanks for reading! You can leave your comments and questions below.