Last week’s update to the SoundCloud iOS app includes support for Dark Mode. This took several months of work and collaboration between design and engineering teams across the company, so we wanted to share our approach to implementing Dark Mode and some of the obstacles we encountered along the way.
Part of a build engineer’s role is to speed up builds. Avoiding unnecessary work through caching is one way to achieve this, but another tool in the build engineer’s belt is disallowing slow builds outright. This is part two in a series about solving Gradle remote build cache misses.
Until recently, one of the top technical risks facing SoundCloud’s Android team was increasing build times. Our engineering leadership was well aware of the problem, and modularization was highlighted in our company’s quarterly goals and objectives as the way to address it. Faster build times mean more productive developers, and more productive developers are happier and can iterate on products more quickly.
Modularization is key to decreasing build times, but avoiding work is another important part of the puzzle, and build caching is one way to avoid that work. Gradle, our build tool for Android, has a local file system cache that reuses the outputs of previously executed tasks. We have also been using the Gradle remote build cache to save our developers’ time: it helps us avoid redoing work that teammates have already done, as well as work we repeat when switching back to older branches. However, to get the full benefits of caching, you have to go beyond simply setting it up.
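As a concrete illustration, here is a minimal sketch of how a remote build cache can be wired up in `settings.gradle.kts`. The cache URL and the CI environment variable are placeholders, not our actual configuration.

```kotlin
// settings.gradle.kts — a minimal sketch; the cache node URL and the
// CI environment variable are placeholders, not our real setup.
buildCache {
    local {
        // Reuse task outputs produced earlier on this machine.
        isEnabled = true
    }
    remote<HttpBuildCache> {
        // A shared cache node reachable by all developers (hypothetical URL).
        url = uri("https://gradle-cache.example.com/cache/")
        // A common policy: only CI pushes entries; developer machines
        // only pull results that CI has already built.
        isPush = System.getenv("CI") != null
    }
}
```

With this in place, caching still has to be switched on, typically with `org.gradle.caching=true` in `gradle.properties`.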
Once every two weeks, we prepare new versions of our mobile apps to be published to the app stores. Being confident about releasing software at this scale, with as many features and code contributions as we have, while targeting the wide range of devices we do at SoundCloud, is no easy task. So, over the last few years, we have introduced many tools and practices into our release process to help.
In this blog post, I’ll cover some of the techniques we use to guarantee we’re always releasing quality Android applications at SoundCloud.
In the past, the Search Team at SoundCloud had high lead times for making updates to Elasticsearch clusters, whether implementing a new feature or simply fixing a bug. This was because both tasks required us to reindex our catalog from scratch, which meant reindexing more than 720 million users, tracks, playlists, and albums. Altogether, this process took up to one week, and in one case, rolling out a bug fix took almost a month.
In this post, I would like to share the concrete Elasticsearch tweaks we made so that we can now reindex our entire catalog in one hour.
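The post covers the specific changes in detail; as a taste of the general approach, bulk reindexing is commonly sped up by temporarily relaxing index settings, such as disabling periodic refreshes and replica copies while the bulk writes run. The sketch below shows that idea in Kotlin using only the JDK’s HTTP client; the host, index name, and settings values are illustrative assumptions, not necessarily the exact tweaks from the post.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical host and index name, for illustration only.
const val ES = "http://localhost:9200"
const val INDEX = "catalog"

fun putSettings(body: String) {
    val request = HttpRequest.newBuilder(URI.create("$ES/$INDEX/_settings"))
        .header("Content-Type", "application/json")
        .PUT(HttpRequest.BodyPublishers.ofString(body))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    check(response.statusCode() == 200) { "settings update failed: ${response.body()}" }
}

fun main() {
    // Before bulk indexing: no periodic refreshes, no replica writes.
    putSettings("""{"index": {"refresh_interval": "-1", "number_of_replicas": 0}}""")

    // ... run the bulk reindex of the catalog here ...

    // Afterward: restore settings so searches see fresh, replicated data.
    putSettings("""{"index": {"refresh_interval": "1s", "number_of_replicas": 1}}""")
}
```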
Last month we launched SoundCloud Premier Distribution, which allows creators to distribute their music from SoundCloud to other streaming platforms and stores. For many of our users, this will be their first encounter with the conventions and requirements of the music industry supply chain. Due to strict requirements around metadata and media, the barriers to entry in this world are very different from those a creator faces when uploading to SoundCloud.
The aim of SoundCloud Premier Distribution is to make the path from SoundCloud upload to off-platform plays as frictionless as possible. Here we’ll look at how a system of automatic and manual validations allows users to get fast feedback as they prepare a release.
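To make the idea concrete, here is a hypothetical sketch of what an automatic validation pipeline for a release could look like; every name and rule below is invented for illustration and is not taken from our actual system.

```kotlin
// All names and rules here are invented for illustration.
data class Release(val title: String, val isrc: String?, val artworkPx: Int)

// One validator per supply-chain requirement; null means the check passed.
typealias Validator = (Release) -> String?

val validators: List<Validator> = listOf(
    { r -> if (r.title.isBlank()) "Title must not be empty" else null },
    { r -> if (r.isrc == null) "Each track needs an ISRC code" else null },
    { r -> if (r.artworkPx < 3000) "Artwork must be at least 3000x3000 px" else null },
)

// Run every check so the user sees all issues at once, not one per attempt.
fun validate(release: Release): List<String> =
    validators.mapNotNull { it(release) }

fun main() {
    val issues = validate(Release(title = "My EP", isrc = null, artworkPx = 1400))
    issues.forEach(::println) // fast, automatic feedback before any manual review
}
```

Running every validator rather than stopping at the first failure is part of what makes the feedback fast: users can fix all issues in one pass instead of resubmitting repeatedly.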
Although it can be easy to know if you’ve messed up badly as a manager, it’s not always as easy to know if you’re doing a good job. In particular, the power dynamics at play can make it hard for people on your team to feel confident telling you what’s working well and what isn’t. In this article, I’m going to talk about an approach I started using in the last few years that seems to strike the best balance between getting the input managers need and promoting a healthy culture of direct feedback.
In 2017, our team of six engineers wanted to try out a clean architectural pattern and decided on VIPER. Below, I’ll cover how the team approached this.
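For readers unfamiliar with the pattern, here is a minimal sketch of the five VIPER roles (View, Interactor, Presenter, Entity, Router), written in Kotlin for consistency with the other examples here; the names are invented for illustration and are not taken from our codebase.

```kotlin
// A minimal, hypothetical sketch of the five VIPER roles.
data class Track(val id: Long, val title: String)            // Entity

interface TrackListView {                                     // View: passively renders state
    fun show(tracks: List<Track>)
}

interface TrackListRouter {                                   // Router: owns navigation
    fun openTrack(id: Long)
}

class TrackListInteractor {                                   // Interactor: business logic
    fun loadTracks(): List<Track> = listOf(Track(1, "Demo"))
}

class TrackListPresenter(                                     // Presenter: glues the other parts
    private val view: TrackListView,
    private val interactor: TrackListInteractor,
    private val router: TrackListRouter,
) {
    fun onViewReady() = view.show(interactor.loadTracks())
    fun onTrackTapped(id: Long) = router.openTrack(id)
}
```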
We revisited our distributed tracing setup and incorporated Kubernetes pod metadata into it, significantly enhancing our engineers’ ability to troubleshoot problems that cut across microservices.
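The post describes our setup in detail; as an illustration of the general technique, one common approach is to expose pod metadata to the container through the Kubernetes Downward API and attach it to every span. The sketch below uses the OpenTelemetry API as an example tracing library, which is an assumption, not necessarily the stack we use.

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry

// Assumes the pod spec injects these variables via the Downward API
// (fieldRef: metadata.name, metadata.namespace, spec.nodeName);
// OpenTelemetry stands in here for whatever tracing library is in use.
fun main() {
    val tracer = GlobalOpenTelemetry.getTracer("example-service")
    val span = tracer.spanBuilder("handle-request")
        .setAttribute("k8s.pod.name", System.getenv("POD_NAME") ?: "unknown")
        .setAttribute("k8s.namespace.name", System.getenv("POD_NAMESPACE") ?: "unknown")
        .setAttribute("k8s.node.name", System.getenv("NODE_NAME") ?: "unknown")
        .startSpan()
    try {
        // ... handle the request; every trace now carries pod metadata,
        // so a slow span can be tied back to a specific pod and node ...
    } finally {
        span.end()
    }
}
```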