As you may know, I work at Flow, a project management app that has a lot going on under the hood. It’s a big app.
We’re a small team. And we’re a busy team. We try to make time for maintenance and improving infrastructure, and I think we do a fairly decent job of it, but there’s still this constant nagging fear about app performance.
Having an app with poor performance is the most embarrassing kind of sin to me. I read articles all the time by very smart people who talk at length about all the ways to make sure your app is performant, and I cringe whenever I think about anywhere that we’re falling short.
But improving performance isn’t a one-day or two-day job, and some changes can’t be made all in one go, as much as we’d like to. So we’ve taken an incremental approach to making things a little bit better at Flow, one step at a time.
There’s still lots to be done, but I thought I’d list some things we already have done, or things that are at least underway.
We stopped loading every single task on boot
This was such a killer, both on our end as well as on the poor API. For various reasons, when the app was first built, it was built with the assumption that the web client needed every single open task on boot. Do you have any idea how many tasks some large teams have? Thousands. Imagine – every time someone opens Flow we have to request thousands of tasks before we can launch the app. The API request was enormous and therefore relatively slow, and the client would also then have to parse all that JSON, which was not an insignificant amount of work either.
Ideally, we'd only fetch the tasks we needed for a specific view, and we’d also paginate those tasks.
We don’t live in an ideal world though, so for now only one of those wishes has been granted: tasks are still not paginated (so if you navigate to "Assigned to me" we fetch all tasks assigned to you in one go, not scroll-to-load-more as I secretly fantasize about), but we did change the app to stop assuming it always has every task available to it, and to instead request tasks as needed when the user navigates around the app.
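The shape of that change can be sketched in a few lines. This is a minimal illustration, not Flow’s actual code, and all the names here are hypothetical: each view asks a store for its own tasks the first time it’s shown, and the store fetches from the API at most once per view.

```javascript
// Sketch of view-scoped, on-demand task fetching (names are hypothetical).
// Instead of loading every task on boot, each view requests its own tasks
// when the user navigates to it, and the store caches the result.
function createTaskStore(fetchTasksForView) {
  const cache = new Map(); // view key -> Promise resolving to that view's tasks

  return {
    // Returns a promise of tasks for a view, fetching at most once per view.
    getTasks(viewKey) {
      if (!cache.has(viewKey)) {
        cache.set(viewKey, fetchTasksForView(viewKey));
      }
      return cache.get(viewKey);
    },
  };
}
```

Navigating to "Assigned to me" would then call something like `store.getTasks('assigned:me')`, and revisiting the view hits the cache instead of the API.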
The API team is very happy with us, and app load times got a nice boost from this change.
We switched to Webpack
For a long time after the switch, our app was still served in only 4 large chunks, but moving to Webpack was a big step in the right direction, and it set us up for our more recent change:
We started being thoughtful about our Webpack chunks
After downloading Webpack Bundle Analyzer and taking a look at how our chunks were constructed, we began to strategically change how we required JS in order to minimize how much was loaded at boot.
For example: we used an npm module called video.js, and when looking at the bundle analyzer I was surprised to see that it was one of the largest dependencies in our node_modules folder. And yet we barely even used it! The only place we needed it was for playing videos in the light box, and I’m sure many of our customers never even upload video files, or at least do so rarely enough that it doesn’t merit ALWAYS loading the JS to handle it.
Now video.js is only fetched and loaded if a user launches a light box containing a video.
Without the analyzer, I never would have even realized what a beast that one module was.
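The pattern behind that is small enough to sketch. This is a generic illustration, not Flow’s actual code: a loader that runs once and caches its promise, where in a Webpack app the loader would be a dynamic `import()` so the module gets split into its own chunk and fetched on first use.

```javascript
// Sketch of lazy-loading a heavy module exactly once (video.js here is
// just the motivating example). With Webpack, the loader would be
// () => import('video.js'), which code-splits the module into its own
// chunk that the browser only downloads on the first call.
function lazyOnce(loader) {
  let promise = null;
  return function load() {
    if (!promise) promise = loader(); // first call kicks off the fetch
    return promise;                   // later calls reuse the same promise
  };
}

// Hypothetical usage: only load the player when a video light box opens.
const loadVideoPlayer = lazyOnce(() => Promise.resolve({ name: 'video.js stub' }));
```

Because the promise itself is cached, opening ten light boxes still triggers only one network fetch.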
We are aggressive with shouldComponentUpdate
If you google this you’ll find articles about why you should just let React handle DOM reconciliation and not try to be smart about it, because React is already fast at rendering and you’ll only make things messy and complicated by defining your own shouldComponentUpdate methods.
I’m sure these people make a good point in many cases, but I am here to tell you that that philosophy did NOT work for us. React rendering may be relatively quick, but it is not fast enough when you have a page with hundreds of components. Defining strict SCU methods gave us a huge boost in app performance.
You do have to be careful. You have to make sure you never mutate objects or arrays, and it’s best to avoid deep equality checks whenever possible and instead construct your data/components in a way that a simple === check will suffice. But the payoff has been huge for us, and I stand by our decision to “abuse” the powers of shouldComponentUpdate.
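The shallow-comparison idea above can be sketched as plain code. This isn’t Flow’s implementation, just the standard shape of the check a strict shouldComponentUpdate relies on: every prop is compared by reference with ===, which only gives correct answers if you never mutate objects or arrays in place.

```javascript
// A shallow equality check: two objects are "equal" if they have the
// same keys and every value is === identical. Nested objects are NOT
// recursed into, which is why immutable updates are a prerequisite.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => a[key] === b[key]);
}

// Sketch of a strict SCU built on it (not Flow's actual component code):
// shouldComponentUpdate(nextProps) {
//   return !shallowEqual(this.props, nextProps);
// }
```

Note the trade-off: a freshly constructed `{}` or `[]` prop will never compare equal, so data has to be built so that unchanged values keep their identity between renders.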
We are also cautious about component mounting
Much more expensive than component rendering is component mounting. This is something we still struggle with and haven’t perfected, but on some of our views we have implemented infinite-scroll logic where we don’t mount components until you start to scroll down the page, in order to save on initial page load time.
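One common way to decide what to mount is shown below. This is a hypothetical helper, not Flow’s code, and it assumes a fixed row height for simplicity: only the rows that fit above the bottom of the viewport are mounted at first, and scrolling down raises the count.

```javascript
// Sketch of the mount-on-scroll idea (hypothetical helper, fixed row
// height assumed): returns how many rows should be mounted given the
// current scroll position. On initial load (scrollTop = 0) only the
// rows that fit in the viewport get mounted; the rest mount as you scroll.
function rowsToMount(scrollTop, viewportHeight, rowHeight, rowCount) {
  const visibleBottom = scrollTop + viewportHeight;
  return Math.min(rowCount, Math.ceil(visibleBottom / rowHeight));
}
```

A scroll handler would call this and mount any rows beyond the previous count, so the expensive mounting work is spread out instead of all paid up front.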
We are mindful of avoiding situations that may cause DOM thrashing
This has been less of an issue in recent memory, but there was a time when, if you loaded the app and had the projects in your sidebar sorted differently in your cache vs. what the API gave us after we fetched the updated information, an event was triggered for every single project whose sort position had been updated. And for every single event the sidebar would re-sort and re-render the newly ordered projects. On teams that had enough projects, this was enough to crash the browser tab.
Now, when we’re fetching large chunks of data, we’re thoughtful about whether we should trigger an individual event for each thing that changes. In the case of sort order changes on boot, individual events weren’t necessary – we could trigger a single event after all the projects had been updated, and allow the sidebar to re-sort and re-render once, instead of N times.
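The one-event-for-N-changes idea can be sketched like this. The names are hypothetical, not Flow’s actual event system: changes are collected while a batch of data is applied, and listeners are notified exactly once at the end.

```javascript
// Sketch of batching change events (hypothetical names): instead of
// emitting one event per updated project, collect the updates and
// notify once, so the sidebar re-sorts and re-renders a single time.
function createBatcher(onFlush) {
  let pending = [];
  return {
    // Record a change without notifying anyone yet.
    add(change) {
      pending.push(change);
    },
    // Emit one event carrying all accumulated changes, then reset.
    flush() {
      if (pending.length > 0) {
        onFlush(pending); // one event for N changes -> one re-render
        pending = [];
      }
    },
  };
}
```

On boot, the code applying fresh API data would call `add` per project and `flush` once at the end, instead of firing N sidebar events.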
We are conservative and careful about inlining images in our CSS
Inlining some images has been nice for us, in terms of having certain important icons available immediately.
But we try to be mindful of how often we inline images, and we also make sure we never inline an asset more than once. By using SASS placeholder selectors we define an inlined asset once and extend that placeholder selector wherever we need the image. SASS handles the complexity of creating a ridiculously long CSS selector, and the way placeholder selectors work means the image is only inlined once, instead of in every place it is used.
You could do something similar without SASS – having a dedicated CSS class for each icon you want and applying it to elements – but code-wise the placeholder selectors have been very nice to work with.
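The pattern looks roughly like this (illustrative class names, with the data URI truncated): the inlined asset lives in one placeholder, and every selector that needs the icon extends it, so the image appears in the compiled CSS only once.

```scss
// The inlined asset is defined once, in a placeholder selector.
%icon-task {
  background-image: url('data:image/svg+xml;base64,...'); // truncated for brevity
}

// Each place that needs the icon extends the placeholder.
.sidebar-task-icon {
  @extend %icon-task;
}

.card-task-icon {
  @extend %icon-task;
}
```

SASS compiles this to a single rule with a combined selector (`.sidebar-task-icon, .card-task-icon { … }`), so the long data URI is emitted once rather than per usage.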
We are always trying to think up more ways to improve app performance and load times
It’s an uphill battle, and there’s always lots more to learn and lots more to do. But every step we take feels good, knowing that we are improving our customers’ lives a teeny tiny bit each time. They may never even notice or realize it, like they would with a new/improved product feature, but I feel warm and fuzzy all the same 😊