SoundCloud for Developers

Backstage Blog

You're browsing posts of the category Architecture

  • June 13th, 2014 [Scala, Finagle, Ruby, Architecture] Building Products at SoundCloud—Part III: Microservices in Scala and Finagle. By Phil Calçado

    In the first two parts of this series, we talked about how SoundCloud started breaking away from a monolithic Ruby on Rails application into a microservices architecture. In this part we will talk a bit more about the platforms and languages in which we tend to write these microservices.

    At the same time that we started building systems outside the Mothership (our Rails monolith), we began breaking our large team of engineers into smaller teams, each focused on one specific area of our platform.

    It was a phase of high experimentation, and instead of prescribing which languages or runtimes these teams should use, we had a rule of thumb: write it in whatever you feel confident enough putting in production and being on-call for.

    This led to a Cambrian Explosion of languages, runtimes and skills. We had systems being developed in everything from Perl to Julia, including Haskell, Erlang, and node.js.

    While this process proved quite productive in creating new systems, we started having problems when maintaining them. The bus factor for several of our systems was very low, and we eventually decided to consolidate our tools.

    Based on the expertise and preferences across teams, and an assessment of the industry and our peers, we decided to stick to the JVM and select JRuby, Clojure, and Scala as our company-wide supported languages for product development. For infrastructure and tooling, we also support Go and Ruby.

    It turns out that selecting the runtime and language is just one step in building products in a microservices architecture. Another important aspect an organization has to think about is what stack to use for things like RPC, resilience, and concurrency.

    After some research and prototyping, we ended up with three alternatives: a pure Netty implementation, the Netflix stack, and the Finagle stack.

    Using pure Netty was tempting at first. The framework is well documented and maintained, and its support for HTTP, our main protocol for RPC, is good. After a while, though, we found ourselves implementing abstractions on top of it to meet the basic concurrency and resilience requirements of our systems. If such abstractions were required anyway, we would rather use something that already existed than reinvent the wheel.

    We tried the Netflix stack, and a while back Joseph Wilk wrote about our experience with Hystrix and Clojure. Hystrix meets the resilience and concurrency requirements very well, but its API, based on the Command pattern, was a turnoff. In our experience, Hystrix commands do not compose well unless you also use RxJava, and although we use this library for several back-end systems and our Android application, we decided that the reactive approach was not the best fit for all of our use cases.

    We then started trying out Finagle, a protocol-agnostic RPC system developed by Twitter and used by many companies our size. Finagle meets all three of our requirements, and its design is based on a familiar and extensible model: Pipes-and-Filters meets Futures.
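
    To give a flavor of that model, here is a minimal sketch of a Finagle HTTP service. The service itself is a toy example, not one of our production systems:

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Request, Response, Status}
    import com.twitter.util.{Await, Future}

    // A Finagle service is just a function from a request to a Future of a response.
    val echo = new Service[Request, Response] {
      def apply(req: Request): Future[Response] = {
        val rep = Response(req.version, Status.Ok)
        rep.contentString = req.getParam("msg", "hello")
        Future.value(rep)
      }
    }

    // Finagle takes care of connection management and concurrency.
    val server = Http.serve(":8080", echo)
    Await.ready(server)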

    The first issue we found with Finagle is that, unlike the other alternatives, it is written in Scala, so the language runtime jar file is required even for a Clojure or JRuby application. We decided that this wasn't too important: although it adds about 5 MB to the footprint of the transitive dependencies, the language runtime is very stable and does not change often.

    The other big issue was adapting the framework to our conventions. Twitter mostly uses Thrift for RPC; we use HTTP. They use ZooKeeper for service discovery; we use DNS. They use a Java properties-based configuration system; we use environment variables. They have their own telemetry system; we have our own telemetry system (we're not ready to show it just yet, but stay tuned for some exciting news there). Fortunately, Finagle has some very nice abstractions for these areas, and most of the issues were solved with minimal changes, with no need to patch the framework itself.
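
    For illustration, wiring a Finagle HTTP client to DNS-based discovery and environment-variable configuration can be as simple as the following sketch (the service address and variable name are hypothetical, not our actual ones):

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Method, Request, Response}
    import com.twitter.util.Future

    // The destination comes from an environment variable, per our conventions;
    // a plain "host:port" destination is resolved through DNS by default.
    val dest = sys.env.getOrElse("USERS_SERVICE_ADDR", "users.internal.example.com:8080")
    val users: Service[Request, Response] = Http.newService(dest)

    val req = Request(Method.Get, "/users/123")
    val rep: Future[Response] = users(req)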

    We then had to deal with the rather messy state of Futures in Scala. Heather Miller, from the Scala core team, explained the history and the changes introduced by newer versions of the language in a great presentation. In summary, what we have across the Scala ecosystem is several different implementations of Futures and Promises, with Finagle coupled to Twitter's Futures. Although Scala allows for interoperability between these implementations, we decided to use Twitter's Futures everywhere, and to invest time in helping the Finagle community move closer to the most recent versions of Scala rather than debug the weird issues that this interoperability might spawn.
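
    Bridging the two Future implementations where it is unavoidable takes only a few lines; the following is a sketch of the usual shim (utility libraries ship ready-made versions of these conversions):

    import com.twitter.util.{Future => TFuture, Promise => TPromise, Return, Throw}
    import scala.concurrent.{ExecutionContext, Future => SFuture, Promise => SPromise}
    import scala.util.{Failure, Success}

    // Complete a Scala Promise from a Twitter Future.
    def asScala[A](tf: TFuture[A]): SFuture[A] = {
      val p = SPromise[A]()
      tf.respond {
        case Return(a) => p.success(a)
        case Throw(e)  => p.failure(e)
      }
      p.future
    }

    // And the reverse direction.
    def asTwitter[A](sf: SFuture[A])(implicit ec: ExecutionContext): TFuture[A] = {
      val p = new TPromise[A]
      sf.onComplete {
        case Success(a) => p.setValue(a)
        case Failure(e) => p.setException(e)
      }
      p
    }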

    With these issues addressed, we focused on how best to develop applications using Finagle. Luckily, Finagle's design philosophy is nicely described by Marius Eriksen, one of its core contributors, in his paper Your Server as a Function. You don't need to follow these principles in your userland code, but in our experience everything integrates much better if you do. Using a functional programming language like Scala makes following these principles quite easy, as they map very well to pure functions and combinators.
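
    For example, cross-cutting concerns live in filters, which compose with services through plain combinators. A sketch, with an illustrative timing filter wrapping the echo service from the earlier example:

    import com.twitter.finagle.{Service, SimpleFilter}
    import com.twitter.finagle.http.{Request, Response}
    import com.twitter.util.{Future, Stopwatch}

    // A filter wraps a service and yields a new service.
    val timed = new SimpleFilter[Request, Response] {
      def apply(req: Request, service: Service[Request, Response]): Future[Response] = {
        val elapsed = Stopwatch.start()
        service(req).onSuccess { _ =>
          println(s"${req.path} took ${elapsed().inMilliseconds}ms")
        }
      }
    }

    // Pure composition: the same service, now with timing around it.
    val timedEcho: Service[Request, Response] = timed.andThen(echo)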

    We have used Finagle for HTTP, Thrift, memcached, Redis, and MySQL. Every request to the SoundCloud platform is very likely hitting at least one of our Finagle-powered microservices, and the performance we have from these is quite amazing.

    In the last part of this series of blog posts, we will talk about how Finagle and Scala are being used to move away from a one-size-fits-all RESTful API to optimized back-ends for our applications.

  • June 12th, 2014 [Scala, Finagle, Ruby, Architecture] Building Products at SoundCloud—Part II: Breaking the Monolith. By Phil Calçado

    In the previous post, we talked about how we enabled our teams to build microservices in Scala, Clojure, and JRuby without coupling them with our legacy monolithic Rails system. After the architecture changes were made, our teams were free to build their new features and enhancements in a much more flexible environment. An important question remained, though: how do we extract the features from the monolithic Rails application called Mothership?

    Splitting a legacy application is never easy, but luckily there are plenty of industry and academic publications to help you out.

    The first step in any activity like this is to identify and apply the criteria used to define the units to be extracted. At SoundCloud, we decided to use the concept of a Bounded Context, from the work of Eric Evans and Martin Fowler. An obvious example of a Bounded Context in our domain was user-to-user messages: a well-contained feature set, highly cohesive, and not too coupled to the rest of the domain, as it only needs to hold references to users.

    After we identified the Bounded Context, the next task was to find a way to extract it. Unfortunately, Rails’ ActiveRecord framework often leads to a very coupled design. The code dealing with such messages was as follows:

    def index
      if (InboxItem === item)
        respond mailbox_items_in_collection.index.paginate(:page => params[:page])
      else
        respond mailbox_items_in_collection.paginate(
          :joins => "INNER JOIN messages ON #{safe_collection}_items.message_id = messages.id",
          :page  => params[:page],
          :order => 'messages.created_at DESC')
      end
    end
    

    Because we wanted to extract the messages’ Bounded Context into a separate microservice, we needed the code above to be more flexible. The first step we took was to refactor this code into what Michael Feathers describes as a seam:

    A seam is a place where you can alter behavior in your program without editing in that place.

    So we changed our code a little bit:

    def index
      conversations = cursor_for do |cursor|
        conversations_service.conversations_for(
          current_user,
          cursor[:offset],
          cursor[:limit])
      end

      respond collection_for(conversations, :conversations)
    end
    

    The first version of the conversations_service#conversations_for method was not that different from the previous code; it performed the exact same ActiveRecord calls.

    We were ready to extract this logic into a microservice without having to refactor lots of controllers and other Presentation Layer code. We first replaced the implementation of conversations_service#conversations_for with a call to the service:

    def conversations_for(user, offset = 0, limit = 50)
      response = @http_client.do_get(service_path(user), pagination(offset, limit))
      parse_response(user, response)
    end
    

    We avoided big-bang refactorings as much as we could, and this required us to have the microservices working together with the old Mothership code for as long as it took to completely extract the logic into the new microservice.

    As described before, we did not want to use the Mothership’s database as the integration point for microservices. That database is an Application Database, and making it an Integration Database would cause problems because we would have to synchronize any change in the database across many different microservices that would now be coupled to it.

    Although using the database as the integration point between systems was not planned, we had the new microservices accessing the Mothership’s database during the transition period.

    This brought up two important issues. First, during the whole transition period, the new microservices could not change the relational model in MySQL—or, even worse, use a different storage engine. For extreme cases, like user-to-user messages, where a thread-based model was replaced by a chat-like one, we had cron jobs keeping the different databases synchronized.

    The other issue was related to the Semantic Events system described in Part I. Our architecture and infrastructure were designed around the requirement that events be emitted by the system where the state change happened, and that this be a single system. Because we could not have both the Mothership and the new microservice emitting events, we had to implement only the read-path endpoints until we were ready to make the full switch from the Mothership to the new microservice. This was less problematic than we first thought, but it did impact product prioritization, because the features to be delivered were constrained by this strategy.

    By applying these principles we were able to extract most services from the Mothership. Currently we have only the most coupled part of our domain there, and products like the new user-to-user messaging system were built completely decoupled from the monolith.

    In the next part, we will look at how we use Scala and Finagle to build our microservices.

  • June 11th, 2014 [Scala, Finagle, Ruby, Architecture] Building Products at SoundCloud—Part I: Dealing with the Monolith. By Phil Calçado

    Most of SoundCloud's products are written in Scala, Clojure, or JRuby. This wasn't always the case. Like other start-ups, SoundCloud was created as a single, monolithic Ruby on Rails application running on the MRI, Ruby's official interpreter, and backed by memcached and MySQL.

    We affectionately call this system Mothership. Its architecture was a good solution for a new product used by several hundreds of thousands of artists to share their work, collaborate on tracks, and be discovered by the industry.

    The Rails codebase contained both our Public API, used by thousands of third-party applications, and the user-facing web application. With the launch of the Next SoundCloud in 2012, our interface to the world became mostly the Public API: we built all of our client applications on top of the same API that partners and developers used.

    Diagram 1

    These days, we have about 12 hours of music and sound uploaded every minute, and hundreds of millions of people use the platform every day. SoundCloud combines the challenges of scaling a very large social network with those of scaling a media distribution powerhouse.

    To scale our Rails application to this level, we developed, contributed to, and published several components and tools to help run database migrations at scale, be smarter about how Rails accesses databases, process a huge number of messages, and more. In the end, we decided to fundamentally change the way we build products, as we felt we were always patching the system rather than resolving the fundamental scalability problem.

    The first change was in our architecture. We decided to move towards what is now known as a microservices architecture. In this style, engineers separate domain logic into very small components. These components expose a well-defined API and implement a Bounded Context, including its persistence layer and any other infrastructure needs.

    Big-bang refactorings have bitten us in the past, so the team decided that the best approach to the architecture change would be not to split the Mothership immediately, but rather to stop adding anything new to it. All of our new features were built as microservices, and whenever a larger refactoring of a feature in the Mothership was required, we extracted the code as part of that effort.

    This started out very well, but soon enough we detected a problem. Because so much of our logic was still in the Rails monolith, pretty much all of our microservices had to talk to it somehow.

    One option around this problem was to have the microservices access the Mothership database directly. This is a very common approach in some corporate settings, but because this database is a Public, but not Published, Interface, it usually leads to many problems whenever the structure of shared tables needs to change.

    Instead, we went for the only Published Interface we had, which was the Public API. Our internal microservices would behave exactly like applications developed by third-party organizations that integrate with the SoundCloud platform.

    Diagram 2

    Soon enough, we realized that there was a big problem with this model: our microservices needed to react to user activity. The push-notifications system, for example, needed to know whenever a track received a new comment so that it could inform the artist about it. At our scale, polling was not an option; we needed to create a better model.

    We were already using AMQP in general and RabbitMQ in particular: in a Rails application you often need a way to dispatch slow jobs to a worker process to avoid hogging the concurrency-weak Ruby interpreter. Sebastian Ohm and Tomás Senart have presented the details of how we use AMQP, but over several iterations we developed a model called Semantic Events, in which changes to domain objects result in a message being dispatched to a broker and consumed by whichever microservice finds the message interesting.
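
    As a sketch of the consuming side (the exchange, queue, and routing-key names here are hypothetical, not our actual ones), a microservice interested in new comments would bind its own queue to the events exchange:

    import com.rabbitmq.client.{AMQP, ConnectionFactory, DefaultConsumer, Envelope}

    object SemanticEventListener extends App {
      val factory = new ConnectionFactory()
      factory.setHost(sys.env.getOrElse("AMQP_HOST", "localhost"))
      val channel = factory.newConnection().createChannel()

      // Durable topic exchange on which domain changes are published.
      channel.exchangeDeclare("semantic-events", "topic", true)

      // Each microservice binds only to the event types it finds interesting.
      val queue = channel.queueDeclare().getQueue
      channel.queueBind(queue, "semantic-events", "comment.created")

      channel.basicConsume(queue, true, new DefaultConsumer(channel) {
        override def handleDelivery(tag: String, env: Envelope,
                                    props: AMQP.BasicProperties,
                                    body: Array[Byte]): Unit =
          println(s"${env.getRoutingKey}: ${new String(body, "UTF-8")}")
      })
    }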

    Diagram 3

    This architecture enabled Event Sourcing, which is how many of our microservices deal with shared data, but it did not remove the need to query the Public API. For example, you might need all fans of an artist, and their email addresses, to notify them about a new track.

    While most of the data was available through the Public API, we were constrained by the same rules we enforced on third-party applications. It was not possible, for example, for a microservice to notify users about activity on private tracks as users could only access public information.

    We explored several possible solutions to the problem. One of the most popular alternatives was to extract all of the ActiveRecord models from the Mothership into a Ruby gem, effectively making the Rails model classes a Published Interface and a shared component. There were several important issues with this approach, including the overhead of versioning the component across so many microservices. Moreover, it had become clear that microservices would be implemented in languages other than Ruby, so we had to think about a different solution.

    In the end, the team decided to use Rails engines (or plugins, depending on the framework's version) to create an Internal API that is available only within our private network. To control what can be accessed internally, we use OAuth 2.0 when an application is acting on behalf of a user, with different authorization scopes depending on which microservice needs the data.

    Diagram 4

    Although we are constantly removing features from the Mothership, having both a push and a pull interface to the old system ensures that we do not couple our new microservices to the old architecture. The microservices architecture has proven crucial to developing production-ready features with much shorter feedback cycles. External-facing examples are the visual sounds and the new stats system.

  • September 20th, 2013 [Architecture, Clojure] Building Clojure Services at Scale. By Duana Stanley

    SoundCloud has a service-oriented architecture, which allows us to use different languages for different services. With concurrency and scaling in mind, we started to build some services in Clojure because of its interoperability with the JVM and the availability of good-quality libraries, and because we just plain like it as a language.

    How do you build distributed, robust, and scalable microservices in Clojure? Read what Joseph Wilk, an engineer and Clojure enthusiast at SoundCloud, has to say.

  • December 4th, 2012 [Search and Discovery, Architecture] Architecture behind our new Search and Explore experience. By Petar Djekic

    Search is front-and-center in the new SoundCloud, key to the consumer experience. We’ve made the search box one of the first things you see, and beefed it up with suggestions that allow you to jump directly to people, sounds, groups, and sets of interest. We’ve also added a brand-new Explore section that guides you through the huge and dynamic landscape of sounds on SoundCloud. We’ve also completely overhauled our search infrastructure, which helps us provide more relevant results, scale with ease, and experiment quickly with new features and models.

    In the beginning

    In SoundCloud’s startup days, when we were growing our product at super speed, we spent just two days implementing a straightforward search feature, integrating it directly into our main app. We used Apache Solr, as was the fashion at the time, and built a master-slave cluster with the common semantics: writes to the master, reads from the slaves. Besides a separate component for processing deletes, our indexing logic was pull-based: the master Solr instance used a Solr DataImportHandler to poll our database for changes, and the slaves polled the master.

    At first, this worked well. But as time went on, our data grew, our entities became more complex, and our business rules multiplied. We began to see problems.

    The problems

    • The index was only updated on the read slaves about every fifteen minutes. Being real-time is crucial in the world of sound: our users' sounds must be searchable immediately after upload.
    • A complete re-index of the master node from the database took up to 24 hours, which turned a simple schema change for a new feature or bugfix into a Sisyphean task.
    • Worse, during those complete-reindex periods we couldn't do incremental indexing, because of limitations in the DataImportHandler. That meant the search index was essentially frozen in time: not a great user experience.
    • Because the search infrastructure was tightly coupled to the main application, low-level Solr and Lucene knowledge leaked everywhere. If we wanted to make a slight change, such as to a filter, we had to touch 25 parts of the main application's codebase.
    • Because the application interacted directly with the master, that master was a single point of failure. When it failed, we lost information about updates, and our only reliable recovery was a complete reindex, which took 24 hours.
    • Since Solr uses segment (bulk) replication, our network was flooded whenever the slaves pulled updates from the master, which caused additional operational issues.
    • Solr felt like it was built in a different age, for a different use case. When we had performance problems, it was opaque and difficult to diagnose.

    In early 2012, we knew we couldn’t go forward with many new features and enhancements on the existing infrastructure. We formed a new team and decided we should rebuild the infrastructure from scratch, learning from past lessons. In that sense, the Solr setup was an asset: it could continue to serve site traffic, while we were free to build and experiment with a parallel universe of search. Green-field development, unburdened by legacy constraints: every engineer’s dream.

    The goals

    We had a few explicit goals with the new infrastructure.

    • Velocity: we want high velocity when working on schema-affecting bugs and features, so a complete reindex should take on the order of an hour.
    • Reliability: we want to grow to the next order of magnitude and beyond, so scaling the infrastructure—horizontally and vertically—should require minimum effort.
    • Maintainability: we don't want to leak implementation details beyond our borders, so we should strictly control access through APIs that we design and control.

    After a survey of the state of the art in search, we decided to abandon Solr in favor of ElasticSearch, which would theoretically address our reliability concerns in their entirety. That left the velocity and maintainability concerns to be addressed in our integration with the rest of the SoundCloud universe. To the whiteboard!

    The plan

    On the read side, everything was pretty straightforward. We’d already built a simple API service between Solr and the main application, but had hit some roadblocks when integrating it. We pivoted and rewrote the backend of that API to make ElasticSearch queries, instead: an advantage of a lightweight service-oriented architecture is that refactoring services in this way is fast and satisfying.

    On the write side—conceptually, at least—we had a straightforward task. Search is relatively decoupled from other pieces of business and application logic. To maintain a consistent index, we only need an authoritative source of events, i.e. entity creates, updates, and deletes, and some method of enriching those events with their searchable metadata. We had that in the form of our message broker: every event that search cared about was being broadcast on a well-defined set of exchanges, and all we had to do was listen in.

    But depending on broker events to materialize a search index is dangerous. We would be susceptible to schema changes: the producers of those events could change the message content, and thereby break search without knowing, potentially in very subtle ways. The simplest, most robust solution was to ignore the content of the messages, and perform our own hydration. That carries a runtime cost, but it would let us better exercise control over our domain of responsibility. We believed the benefits would outweigh the costs.

    Using the broker as our event source and the database as our ground truth, we sketched an ingestion pipeline for the common case. Soon, we realized we had to deal with the problem of synchronization: what should we do if an event had not yet been propagated to the database slave we were querying? It turned out that if we used the timestamp of the event as the "external" version of the document in ElasticSearch, we could lean on ElasticSearch's consistency guarantees to detect and resolve synchronization problems.
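
    A sketch of the idea, using ElasticSearch's external-versioning parameters on its REST index API (the index and type names are illustrative, and the modern java.net.http client is used here purely for demonstration):

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    val es = HttpClient.newHttpClient()

    // Index a document with the event timestamp as its external version.
    // ElasticSearch rejects writes whose version is not higher than the
    // stored one, so late or out-of-order events are detected for us.
    def indexTrack(id: Long, json: String, eventTimestamp: Long): Int = {
      val req = HttpRequest.newBuilder()
        .uri(URI.create(s"http://localhost:9200/tracks/track/$id" +
          s"?version=$eventTimestamp&version_type=external"))
        .header("Content-Type", "application/json")
        .PUT(HttpRequest.BodyPublishers.ofString(json))
        .build()
      // A 409 Conflict means a newer version is already indexed; drop the event.
      es.send(req, HttpResponse.BodyHandlers.ofString()).statusCode()
    }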

    Once it hit the whiteboard, it became clear we could re-use the common-case pipeline for the bulk-index workflow; we just had to synthesize creation events for every entity in the database. But could we make it fast enough? It seemed at least possible. The final box-diagram looked like this:

    Implementation and optimization began. Over a couple of weeks, we iterated over several designs for the indexing service. We eked out a bit more performance in each round, and in the end, we got where we wanted: indexing our entire corpus against a live cluster took around 2 hours, we had no coupling in the application, and ElasticSearch itself gave us a path for scaling.

    The benefits

    We rolled out the new infrastructure in a dark launch on our beta site. Positive feedback on the time-to-searchability was immediate: newly posted sounds were discoverable in about 3 seconds (post some sounds and try it out for yourself!). But the true tests came when we started implementing features that had been sitting in our backlog. We would start support for a new feature in the morning, make schema changes into a new index over lunch, and run A/B tests in the afternoon. If all lights were green, we could make an immediate live swap. And if we had any problems, we could roll back to the previous index just as fast.
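
    The post does not spell out the swap mechanism, but ElasticSearch's atomic _aliases API is the natural one: readers always query a stable alias, which is repointed from the old index to the new one in a single call (index and alias names below are hypothetical):

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    // Remove-and-add on the same alias happens atomically, so queries
    // against "search" never see a gap between the two indices.
    val swap = """{
      "actions": [
        { "remove": { "index": "tracks_v1", "alias": "search" } },
        { "add":    { "index": "tracks_v2", "alias": "search" } }
      ]
    }"""

    val req = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:9200/_aliases"))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(swap))
      .build()
    HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString())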

    There’s no onerous, manual work involved in bootstrapping or maintaining nodes: everything is set up with ElasticSearch’s REST API. Our index definitions are specified in flat JSON files. And we have a single dashboard with all the metrics we need to know how search is performing, both in terms of load and search quality.

    In short, the new infrastructure has been an unqualified success. Our velocity is through the roof, we have a clear path for orders-of-magnitude growth, and we have a stable platform for a long time to come.

    Enter DiscoRank

    It's not the number of degrees of separation between you and John Travolta: DiscoRank, short for Discovery Rank, is our modification of the PageRank algorithm to provide more relevant search results. We match your search against our corpus and use DiscoRank to rank the results.

    The DiscoRank algorithm involves creating a graph of searchable entities, in which each activity on SoundCloud is an edge connecting two nodes. It then iteratively computes the ranking of each node using the weighted graph of entities and activities. Our graph has millions and millions of nodes and many more edges, so we made a lot of optimizations to adapt PageRank to our use case, to keep the graph in memory, and to recalculate the DiscoRank quickly, using results from previous runs of the algorithm as priors. We keep versioned copies of the DiscoRank so that we can swap between them when testing things out.
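
    As a rough illustration of the core computation, here is a minimal in-memory weighted PageRank iteration (the damping factor and iteration count are illustrative, not our actual tuning):

    // Nodes are searchable entities; weighted edges are activities between them.
    def pageRank(
        outEdges: Map[Int, Seq[(Int, Double)]],  // node -> (neighbor, weight)
        prior: Map[Int, Double],                 // e.g. ranks from the previous run
        damping: Double = 0.85,
        iterations: Int = 20): Map[Int, Double] = {
      val nodes = outEdges.keySet ++ outEdges.values.flatten.map(_._1)
      var rank = nodes.map(n => n -> prior.getOrElse(n, 1.0 / nodes.size)).toMap
      for (_ <- 1 to iterations) {
        val contrib = scala.collection.mutable.Map[Int, Double]().withDefaultValue(0.0)
        for {
          (node, edges) <- outEdges
          total = edges.map(_._2).sum
          if total > 0
          (dst, w) <- edges
        } contrib(dst) += rank(node) * w / total
        rank = nodes.map(n => n -> ((1 - damping) / nodes.size + damping * contrib(n))).toMap
      }
      rank
    }

    Seeding the computation with the previous run's ranks, as the prior parameter sketches, is what lets a recalculation converge quickly instead of starting from a uniform distribution.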

    How do we know we’re doing better?

    Evaluating the relevance of search results is a challenge. You need to know what people are looking for (which is not always apparent from their query), and you need a sample that is representative enough to judge improvement. Before we started work on the new infrastructure, we did research and user testing to better understand how and why people were using search on SoundCloud.

    Our evaluation baseline is a set of manually tagged queries. To gather the first iteration of that data, we built an internal evaluation framework for SoundCloud employees: thumbs up or thumbs down on results. In addition, we have metrics like click positions, click engagement, precision, and recall, all constantly updating in our live dashboard. This allows us to compare ranking algorithms and other changes to the way we construct and execute queries. And we have mountains of click-log data still to analyze.

    We have some good improvements so far, but there’s still a lot of tuning that can be done.

    Search as navigation

    The new Suggest feature lets you jump straight to the sound, profile, group, or set you're looking for. We'll trigger your memory if you don't remember how something is spelled or exactly what it's called. And since we know the types of the results returned, we can send you straight to the source. This ability to make assumptions and customizations is a consequence of knowing the structure and semantics of our data, and it gives us a huge advantage in applications like this.

    Architecture-wise, we decided to separate the suggest infrastructure from the main search cluster and build a custom engine based on Apache Lucene's Finite State Transducers. As ever, delivering results fast is crucial: suggestions need to show up right after the keystroke, and it turned out we are competing with the speed of light for this particular use case.

    The fact that ElasticSearch doesn't come with a suggest engine turned out to be a non-issue; rather, it forced us to build this feature in isolation. This separation proved to be a wise decision, since the update rate, request patterns, and customizations are totally different and would have made a built-in solution hard to maintain.

    Search as a crystal ball

    Like Suggest, Explore is based on our new search infrastructure, but with a twist: our main goal for Explore is to showcase the sounds in our community at this very moment. This led to a time-sensitive ranking that puts more emphasis on the newness of a sound.

    So far, so good

    We hope you enjoy using the new search features of SoundCloud as much as we enjoyed building them. Staying focused on a re-architecture is painful when you see your existing infrastructure failing more and more each day, but we haven't looked back, especially since our new infrastructure allows us to deliver a better user experience much faster. We can now roll out new functionality with ease, and the new Suggest and Explore, along with improved Search itself, are just the first results.

    Keep an eye out for more improvements to come!

    Apache Lucene and Solr are trademarks of the Apache Software Foundation