The Android development landscape is changing at a breakneck pace. Emerging technologies change the way users interact with their devices, but they also improve the development process. It’s not easy to guess what the next few months, let alone the next year, will bring to mobile. We decided to give it a go and gathered a group of senior Android developers to talk about the future of mobile development and what we should expect next year. Our experts, Maciej Janusz, Mikołaj Lenart, Paweł Bocheński and Marcin Oziemski, share their Android development predictions for the upcoming year 2019.
Voice control in mobile apps is becoming a new user interface that we need to take into consideration when designing and developing applications. It opens up many possibilities that are still to be discovered.
We can all agree that talking to your phone still feels unnatural, and users need time to get used to it. This technology is a bit like video calls: they are great, and it’s incredible that you can see family and friends on the other side of the globe, but you do not use them everywhere, all the time. When designing voice controls for apps, we also need to account for other means of interaction, like typing or selecting items on a touch screen.
Assistants are getting more and more popular. We can see that every big player has one (Siri, Google Assistant, Bixby, Alexa, Cortana), but also that assistant features are being implemented inside apps as chatbots. The growing popularity of assistants will create a new ecosystem of apps that are built for a specific assistant. The first step in this direction is “Actions” and “Slices”, introduced at Google I/O, which enable apps to integrate with the Assistant directly on the device. You can say “Hey Google, add cheese to my shopping list” and the assistant will send the information to the correct app on your device. We predict that the majority of apps will still have a traditional interface, but to stay in the game they will also need to add assistant integrations.
Chatbots can be treated as small assistants inside apps. Thanks to solutions like Dialogflow, we are able to seamlessly add chat flows to apps without much coding. This can be a very good solution for customer service inside an app, but it also opens new ways of interacting with users, as in the “Shine” app featured in the App Store. The Chinese messaging app WeChat also has many chatbots integrated.
After years of silence, Google issued architecture guidelines on how to build the best Android apps. Although you are not forced to use Android Architecture Components, they are a good starting point for building stable apps. The era of arguing about which pattern is best for Android - MVC, MVP, MVVM or something else - is over, and we can trust that the solutions from Google are good enough for the majority of apps. This will result in more stable apps and less confusion in the developer community. Onboarding a new developer to your team will also probably take less time.
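The core idea behind Google’s recommended architecture is separating UI from state: a ViewModel exposes observable data that the view merely renders. Here is a minimal sketch of that pattern in plain Kotlin, with no Android dependencies; the `Observable` and `CounterViewModel` names are illustrative, not the actual Architecture Components API (which provides `ViewModel` and `LiveData` with lifecycle awareness on top of this idea).

```kotlin
// A tiny observable holder, loosely mimicking LiveData's "emit current
// value on subscribe" behavior (without lifecycle awareness).
class Observable<T>(initial: T) {
    private val observers = mutableListOf<(T) -> Unit>()
    var value: T = initial
        set(newValue) {
            field = newValue
            observers.forEach { it(newValue) }
        }
    fun observe(observer: (T) -> Unit) {
        observers.add(observer)
        observer(value) // deliver the current state immediately
    }
}

// The ViewModel owns state and exposes it; it knows nothing about views.
class CounterViewModel {
    val count = Observable(0)
    fun increment() { count.value = count.value + 1 }
}

fun main() {
    val viewModel = CounterViewModel()
    // In a real app an Activity or Fragment would observe and update the UI.
    viewModel.count.observe { println("count = $it") }
    viewModel.increment()
    viewModel.increment()
}
```

Because the view only reacts to state changes, the same ViewModel survives configuration changes and can be unit-tested without any UI.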
We have a lot of apps installed on our devices, but we only use a few of them on a daily basis. With the growing popularity of IoT devices in smart cities, on-demand apps will be used much more often. The majority of apps will implement parts of their functionality as Instant modules, with full functionality available after installation. Progressive Web Apps will also become popular, especially in e-commerce.
After years of discussions among developers about how to properly implement multithreading on Android, and problems with tools like AsyncTask or EventBus, we now have stable solutions that support safe multithreading. Right now we can choose between RxJava, Kotlin Coroutines and Android LiveData. This, however, raises the question of which technology is best. The most important thing is that all three are mature, stable solutions that help developers write clean code.
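The problem all three libraries solve is the same: run work off the main thread and chain the results without callback spaghetti. A minimal sketch of that flow using only the JDK’s `CompletableFuture` (so it runs without any library) looks like this; `fetchUser` and `fetchPosts` are hypothetical stand-ins for network calls, and RxJava or coroutines would express the same chain more ergonomically.

```kotlin
import java.util.concurrent.CompletableFuture

// Stand-in for an asynchronous network call that loads a user.
fun fetchUser(id: Int): CompletableFuture<String> =
    CompletableFuture.supplyAsync { "user-$id" }

// Stand-in for a second call that depends on the first result.
fun fetchPosts(user: String): CompletableFuture<List<String>> =
    CompletableFuture.supplyAsync { listOf("$user: post-1", "$user: post-2") }

fun main() {
    // Chain the two async steps on a background pool, then consume the result.
    val posts = fetchUser(42)
        .thenCompose { user -> fetchPosts(user) }
        .join() // blocking here only for the demo; a UI app would use a callback
    println(posts)
}
```

With coroutines the same chain becomes two sequential `suspend` calls; with RxJava it becomes a `flatMap` on a stream. The underlying discipline of keeping blocking work off the main thread is identical.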
React Native started a trend of hybrid apps that share code between platforms. The idea is very tempting, and other companies are going in this direction. Other promising solutions worth watching are Flutter from Google and Kotlin/Native. Each is designed for a different use case, but the idea of code sharing is evolving and we can expect these solutions to mature in the coming years.
In 2017 Google switched from a mobile-first to an AI-first strategy. We can see the results of this shift in the growing popularity of TensorFlow and the introduction of ML Kit in the Firebase ecosystem. Creating basic models becomes simpler every day, and you do not need data science expertise to make your app intelligent. Thanks to Google’s strategy, people are becoming more aware of the possibilities of machine learning in mobile development, and are seeing that it’s not as scary as implementing everything from scratch in MATLAB or R. We predict that machine learning will be crucial not only in image and speech recognition, but that it will also be used for prediction and analysis of user behavior.
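Predicting user behavior on-device can be surprisingly lightweight once a model is trained. As a toy illustration (not the ML Kit API), here is a logistic-regression scorer that estimates the probability a user will tap a notification; the features, weights and bias are invented for the example, and in practice they would come from a model trained in TensorFlow and shipped via ML Kit’s custom-model support.

```kotlin
import kotlin.math.exp

// Standard logistic function: maps any score to a probability in (0, 1).
fun sigmoid(x: Double) = 1.0 / (1.0 + exp(-x))

// Score a feature vector against precomputed weights and a bias term.
fun tapProbability(features: DoubleArray, weights: DoubleArray, bias: Double): Double {
    require(features.size == weights.size) { "feature/weight size mismatch" }
    var score = bias
    for (i in features.indices) score += features[i] * weights[i]
    return sigmoid(score)
}

fun main() {
    // Hypothetical features: [sessions per day, hours since last open]
    val weights = doubleArrayOf(0.8, -0.3)
    val p = tapProbability(doubleArrayOf(5.0, 2.0), weights, bias = -1.0)
    println("tap probability = %.2f".format(p))
}
```

The point is that inference is just a few multiplications; the hard part, training the weights, is exactly what tools like TensorFlow take off the mobile developer’s plate.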
It looks like a lot of companies are investing in AR, but nobody really knows yet how to use it to bring value to users. At the moment AR works well for games and headset apps, but it’s not very convenient to walk around looking at the world through your phone’s camera. AR has huge potential, but until somebody figures out how to make it seamless to use, it will remain just an entertaining oddity.
The upcoming year will be very interesting for Android development. We can observe a lot of new technologies emerging that will shape the future of mobile development. As developers, we need to stay up to date with those trends and learn how to implement them in new products.
The future certainly looks bright: we will have even more good-quality apps with even more engaging user interactions. We will also have more stable solutions for building apps, which will result in better products. The most important thing right now is to closely observe new trends and invest in mastering the skills that will matter most in the future.