In the world of app development, performance optimization is a never-ending yet necessary job - one that can be quite rewarding, and not only in terms of developers’ personal satisfaction. First and foremost, better performance very often leads to higher conversion and user retention rates, which translates directly into profit gains.
For example, when Pinterest rebuilt their app with performance in mind, they managed to decrease wait times by 40% and increase sign-ups by 15%. They even saw a 15% increase in search engine traffic! Similarly, the BBC found that for every additional second their site takes to load, 10% of users give up and leave. Time is money, and the numbers prove it.
This is why we strongly advocate betting on the high performance of Python - combined with the Django framework - for so many different projects. But even within this powerful combination, there is always room for improvement. So, today I want to share a short cheat sheet with tips on how to make your Python/Django app work to bring you even better results.
There are a few things you may need to keep an eye on if you want to get the best performance from your database. Within the SQL server, you need to take care of creating and running stored procedures.
And to avoid opening a new database connection every time the app needs to run a query, it would be wise to enable persistent connections. All of this will significantly cut down response times.
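In Django, persistent connections are controlled by the CONN_MAX_AGE database setting. A minimal settings sketch (the backend and database name below are placeholders, not taken from any particular project):

```python
# settings.py (sketch) -- reuse database connections across requests.
# CONN_MAX_AGE is the maximum connection lifetime in seconds:
# 0 (the default) closes the connection after every request,
# None keeps connections open indefinitely.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # assumed backend
        "NAME": "mydb",                             # placeholder name
        "CONN_MAX_AGE": 600,  # keep connections alive for up to 10 minutes
    }
}
```

A finite CONN_MAX_AGE is usually safer than None, since it lets the database recycle stale connections.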
If you want to find out which parts of your Python code you should optimize, make use of one of the available profiling and analysis tools. A profiler lets you measure the performance of individual functions - and even individual lines - and helps you locate hotspots, which may not be easy to find just by reading the source code. There are a lot of Python profilers to choose from (like cProfile or line_profiler), as well as tools you may want to leverage later on for visualising the results (like SnakeViz or KCachegrind).
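To give a feel for how this works, here is a small self-contained sketch using the standard library’s cProfile and pstats modules (the slow_sum function is just a stand-in for real application code):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive function to give the profiler something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Print the ten most expensive calls, sorted by cumulative time;
# hotspots show up at the top of this listing.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Once you know where the time goes, you can feed the same profile data into SnakeViz or KCachegrind for a visual breakdown.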
At this point, it’s pretty important to enable Django’s cached template loader. Why? Because this will allow you to store the template content in memory. This means that you can avoid searching for a match in the file system every time the template is requested. Instead, the cached loader will return a prepared result whenever it’s needed - directly from memory. Plus, you should also bear in mind that all static content should be served by an HTTP cache server.
In effect, the performance of this part of your app won’t be so dependent on the number of requests coming in, and you’ll be able to scale your project in a more optimized manner.
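For reference, a settings sketch enabling the cached loader explicitly (note that recent Django versions already enable it automatically when "loaders" isn’t specified, so the explicit form below mainly matters on older versions or with custom loaders):

```python
# settings.py (sketch) -- wrap the regular loaders in the cached loader
# so compiled templates are kept in memory instead of being re-read
# from the file system on every request.
# When "loaders" is set explicitly, APP_DIRS must not be set to True.
TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "OPTIONS": {
            "loaders": [
                (
                    "django.template.loaders.cached.Loader",
                    [
                        "django.template.loaders.filesystem.Loader",
                        "django.template.loaders.app_directories.Loader",
                    ],
                ),
            ],
        },
    }
]
```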
Being lazy is not always a bad thing. Actually, lazy evaluation tools can help you achieve significant performance gains… Laziness complements caching by delaying computation until it has to be done. Otherwise, any work is generally avoided, saving both time and effort. When it comes to Python, you’ll want to make use of constructs like the generator and generator expression.
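The difference is easy to see in plain Python - a list comprehension does all its work up front, while a generator expression produces values only on demand:

```python
import sys

eager = [i * i for i in range(1_000_000)]   # builds the full list now
lazy = (i * i for i in range(1_000_000))    # builds nothing yet

# The generator object stays tiny no matter how many items it can yield.
print(sys.getsizeof(lazy) < sys.getsizeof(eager))  # True

# Work happens only as items are consumed, and can stop early:
# we never compute the remaining ~999,900 squares.
first_big = next(x for x in lazy if x > 10_000)
print(first_big)
```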
Within the Django framework, QuerySets are already lazy. You can create a QuerySet and pass it around for as long as you like without triggering any database activity; Django only runs the query when the QuerySet is actually evaluated. Django also provides a keep_lazy() decorator, which modifies a function so that if it’s called with a lazy argument, its evaluation is delayed until the result is required.
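To illustrate, here is a sketch that assumes a Django project with a hypothetical Entry model (it won’t run outside such a project):

```python
import datetime
from django.utils.functional import keep_lazy

# Chaining filters builds up the QuerySet but never touches the database.
qs = Entry.objects.filter(headline__startswith="What")   # no query yet
qs = qs.exclude(body_text__icontains="food")             # still no query
qs = qs.filter(pub_date__gte=datetime.date(2023, 1, 1))  # still no query

print(qs)  # evaluation happens here: Django runs a single SQL query

# keep_lazy: the decorated function is only executed once its
# (possibly lazy) argument is actually needed.
@keep_lazy(str)
def fancy_title(text):
    return text.title()
```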
You need to optimize Django sessions in order to process requests faster. Instead of storing user sessions in a slow database, which is the default mode in Django, you would be better off storing session data in memory. Just set SESSION_ENGINE = 'django.contrib.sessions.backends.cache' and relish the improved performance that your app will get from a cache-based session backend.
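In settings terms, that looks roughly like this (the local-memory cache backend below is just for illustration; in production you would typically point CACHES at Redis or Memcached instead):

```python
# settings.py (sketch) -- keep session data in the cache, not the database.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"

CACHES = {
    "default": {
        # Illustrative backend; swap for a Redis/Memcached backend in production.
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
    }
}
```

If you can’t afford to lose sessions when the cache is evicted, the "cached_db" backend is a safer middle ground: it reads from the cache but writes through to the database.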
There are two functions you should use that are designed to cut the number of SQL queries:
select_related() - “this is a performance booster which results in a single, more complex, query” - utilizing it also means that the later use of any foreign-key relationships won’t require database queries;
prefetch_related() - this allows users to prefetch many-to-many and many-to-one objects and, as a result, improves the performance of the entire framework.
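To make the difference concrete, here is a sketch assuming hypothetical Book and Author models, where Book has a ForeignKey to Author and a many-to-many "genres" relation (it requires a Django project to actually run):

```python
# Without select_related, each book.author access below would trigger
# its own query (the classic N+1 problem).
books = Book.objects.select_related("author")  # one JOINed query
for book in books:
    print(book.author.name)                    # no extra queries

# prefetch_related runs one extra query per listed relation,
# then stitches the objects together in Python.
books = Book.objects.prefetch_related("genres")
for book in books:
    print([g.name for g in book.genres.all()])  # no extra queries
```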
It is also worth noting that SQL views should be used for data collection, especially where count or group data is required.
Although this isn’t always 100% guaranteed, every new release of a well-maintained programming language or framework is usually more efficient, reliable, and secure than the previous versions. You may want to try the latest available Python package or Django release, but remember not to base your expectations on the above assumption. Each case is different, so measure and test rather than guess. And remember - you should proceed with newer versions only if you already have a pretty well-optimized Python/Django app.
Optimizing an app usually means speeding it up a bit, lowering memory consumption or easing the database load. Sometimes, if you manage to significantly improve one thing, you may also expect improved performance in another. Other times, for example, accelerated speed may use much more memory and have a devastating effect on the entire program. Therefore, you should always pursue the right balance between the different goals you want to achieve and the methods that you use. Always bear in mind the trade-offs that you’ll need to face and remember that each improvement has to be worth your time and effort - these are your most valuable assets.