SoundCloud for Developers

Discover, connect and build


Backstage Blog

You're browsing posts of the category JavaScript

  • January 24th, 2019 Engineering JavaScript Redux React Garbage Collection in Redux Applications By Jan Monschke

    This post describes why and how we implemented a garbage collector in our Xbox application on top of Redux and in addition to the JavaScript engine’s regular garbage collector.


  • October 1st, 2015 Announcements API JavaScript SDKs Introducing SoundCloud JavaScript SDK 3.0.0 By Jan Monschke

    We are happy to announce version 3.0.0 of our SoundCloud JavaScript SDK.

    The new SDK improves stream security and content uploading functionality, and modernizes the technology stack.

    Version 3 of the SoundCloud JavaScript SDK is a major update and is not backwards compatible. That said, the changes that you need to make your web app work with the new SDK are easy to implement. Please refer to Migrating to JavaScript SDK 3.0.0 to upgrade.

    ECMAScript 2015 and CommonJS

    The original version of the SDK was written in CoffeeScript, which is no longer a core technology at SoundCloud. This update provided us the opportunity to migrate our source code from CoffeeScript to ECMAScript 2015.

    The new SDK now uses the Babel compiler for ES2015 support and webpack as our bundler. This has the additional benefit of making the SoundCloud JavaScript SDK 3.0.0 compliant with CommonJS.

    Because of this, we can now take advantage of the variety of packages that are available via npm, and users can install the SDK via npm as well. Please refer to the npm page for details.

    JavaScript Promises

    Promises have become a core part of JavaScript with the new ES2015 specification, and they allow for better composability and an easier control flow. Internally, we work with Promises a lot, and we wanted to give external developers the ability to benefit from an easier API.
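    As a rough sketch of the style this enables (`getTrack` below is a hypothetical stand-in, not the SDK's real API), promise-returning calls chain together like this:

    ```javascript
    // `getTrack` is a hypothetical stand-in for a promise-returning SDK call.
    function getTrack(id) {
      return Promise.resolve({ id: id, title: 'Hobnotropic' });
    }

    getTrack(49931)
      .then(function (track) {
        // runs once the request has resolved
        console.log(track.title);
      })
      .catch(function (err) {
        // a single place to handle failures anywhere in the chain
        console.error(err);
      });
    ```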

    Web Audio API

    The previous SDK neither provided a way to record sounds from Web Audio applications nor a way to upload Web Audio recordings. It shipped with a Flash component that handled recording and uploading of recordings. There was no way to specify an external file for uploading.

    The new SDK ships with a recorder component that uses Web Audio and getUserMedia instead of Flash, and the component allows you to pass in arbitrary AudioNodes that will be recorded. This makes it much easier to integrate the SoundCloud JavaScript SDK 3.0.0 into creator applications that rely on Web Audio. The SDK provides a dedicated method to publish recordings directly from your web app.

    Secure streaming

    The new SDK also includes a new player component, which improves security for creator content and offers better playback stability.


    We also took the time to rewrite all of our documentation and code examples so that you can get started with the new SDK immediately.

  • May 1st, 2014 Announcements JavaScript SDKs Introducing JavaScript SDK version 2 By Erik Michaels-Ober

    SoundCloud is pleased to introduce a new major version of the SoundCloud JavaScript SDK. In version 2, we've rewritten much of the internal code, resulting in better performance for your JavaScript applications and support for more streaming standards, such as HTTP Live Streaming.

    You can test the new version by pointing your JavaScript applications to

    We've also created a guide to help you upgrade from version 1 to version 2.

    JavaScript SDK version 1 is now deprecated and will be permanently replaced by version 2 on July 1, 2014.

    On June 17, 2014, we will temporarily replace version 1 with version 2 between 10:00 and 11:00 UTC. We will do this again on June 24, 2014, between 18:00 and 19:00 UTC. These two upgrade tests will give you an opportunity to understand the impact of this change on your applications. To ensure a seamless transition for your users, we strongly encourage you to upgrade and perform internal tests in advance of these dates.

    To receive notices before, during, and after these tests, follow @SoundCloudDev on Twitter.

    If you have any questions about this upgrade, please feel free to email

  • February 20th, 2014 JavaScript Smooth image loading by upscaling By Nick Fisher

    The SoundCloud site is a single-page application that displays a multitude of users’ images, and we use a technique to make image loading appear smooth and fast. When displaying an image on screen, we want it to appear to the user as quickly as possible. Images are shown in many contexts, from a tiny avatar on a waveform to a large profile image, so we create each image in several sizes. If you are using Gravatar, this technique also applies, because you can fetch arbitrarily sized images by passing the desired size in a query parameter (?s=).

    (The same avatar rendered at each size: tiny, small, medium, and large.)

    The technique uses the browser’s cache of previously loaded images. When displaying a large avatar image, first display a smaller version that is stretched out to full size. When the larger image has loaded, it fades in over the top of the smaller version.

    The HTML looks like this:

    <img class="placeholder" src="small.jpg" width="200" height="200">
    <img class="fullImage" src="large.jpg" width="200" height="200">

    The CSS looks like this:

    .fullImage {
      transition: opacity 0.2s linear;
    }

    For the sake of brevity, the positioning code is not included in the preceding snippet. However, the images should lie atop one another.

    Finally, the JavaScript code looks like this:

    var fullImage   = $('.fullImage'),
        placeholder = $('.placeholder');

    fullImage
      .css('opacity', 0)
      .on('load', function () {
        this.style.opacity = 1;
        setTimeout(placeholder.remove.bind(placeholder), 500);
      });

    Thus far, it’s not too complicated, and it gives a nice effect to the loading of images.

    But there’s a problem: we don’t want to make a request to get the small image just to display it for a few milliseconds. The overhead of making HTTP requests means that loading the larger image will usually not take significantly longer than the small one. Therefore, it only makes sense to use this technique if a smaller image has already been loaded during a particular session and can thus be served from the browser’s cache. How do we know which images are in the cache? Each time an avatar is loaded, we need to keep track of that. However, over time there could be many thousands of avatars loaded within one session, so the tracking needs to be memory efficient. Instead of tracking the full URLs of loaded images, we extract the minimum amount of information to identify an image, and use a bitmask to store which sizes have been loaded:

    // a simple map object, { identifier => loaded sizes }
    var loadedImages = {},
        // Let's assume a basic url structure like this:
        // "{identifier}-{size}.jpg"
        imageRegex = /\/(\w+)-(\w+)\.jpg$/,
        // a list of the available sizes.
        // format is [pixel size, filename representation]
        sizes = [
          [ 20, "tiny"  ],
          [ 40, "small" ],
          [100, "medium"],
          [200, "large" ]
        ];

    // extract the identifier and size, and record the size as loaded
    function storeInfo(url) {
      var parts = imageRegex.exec(url),
          id    = parts[1],
          size  = parts[2];
      // find the index which contains this size
      sizes.some(function (info, index) {
        if (info[1] === size) {
          loadedImages[id] |= 1 << index;
          return true;
        }
      });
    }

    // once the image has loaded, store it into the map
    $('.fullImage').load(function () {
      storeInfo(this.src);
    });
    When the image loads, we extract the important parts from the URL: namely, the identifier and the size modifier. Each size is mapped to a number (its index in the sizes array), and the appropriate bit is turned on in the loadedImages map. The line loadedImages[id] |= 1 << index does this conversion and bit manipulation; 1 << index is essentially the same as Math.pow(2, index). By storing only a single number per entry, we save quite a bit of memory, since a single number can encode many different flags. For example, assume there are four different sizes and 10,000 images in the following map:

    asBools = {
      a: [true, true, false, true],
      b: [false, true, false, false]
      // etc...
    };
    asInts = {
      a: 11,  // 2^0 + 2^1 + 2^3 = 1 + 2 + 8
      b: 2    // 2^1
      // etc...
    };

    The memory footprints of these objects differ by 30%: 1,372,432 bytes for the booleans, and 1,052,384 for the integers. The key names consume the largest portion of these objects’ memory. For this reason, it is important to compress the key names as much as possible. Numeric keys are stored particularly efficiently by the V8 JavaScript engine found in Chrome.
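    Condensed into a small, jQuery-free sketch (the names here are illustrative, not from the original code), the bookkeeping looks like this:

    ```javascript
    // the available sizes, smallest to largest;
    // the bit index is the position in this list
    var SIZES = ['tiny', 'small', 'medium', 'large'];
    var loaded = {};

    // turn on the bit for this size (undefined | x coerces to 0 | x)
    function markLoaded(id, size) {
      loaded[id] |= 1 << SIZES.indexOf(size);
    }

    // test whether the bit for this size is set
    function hasLoaded(id, size) {
      return Boolean(loaded[id] & (1 << SIZES.indexOf(size)));
    }

    markLoaded(42, 'tiny');
    markLoaded(42, 'medium');
    console.log(hasLoaded(42, 'tiny'));   // true
    console.log(hasLoaded(42, 'small'));  // false
    ```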

    We now have a map that shows us which images have been loaded during this session, and we can use that information to choose a placeholder:

    // find the largest image smaller than the requested one
    function getPlaceholder(fullUrl) {
      var parts = imageRegex.exec(fullUrl),
          id = parts[1],
          targetSize = parts[2],
          targetIndex = -1;
      // find the index of the largest size below the requested one
      sizes.some(function (info, index) {
        if (info[1] === targetSize) {
          targetIndex = index - 1;
          return true;
        }
      });
      // walk down through the smaller sizes until we find
      // one that has already been loaded
      while (targetIndex >= 0) {
        if (loadedImages[id] & (1 << targetIndex)) {
          return fullUrl.replace(
            targetSize + '.jpg',
            sizes[targetIndex][1] + '.jpg'
          );
        }
        targetIndex--;
      }
    }

    // and in usage:
    var placeholderUrl = getPlaceholder(fullSizeUrl);
    if (placeholderUrl) {
      // a smaller image has been loaded previously, so...
    } else {
      // no smaller image has been loaded, so...
    }

    Although this technique is a bit involved and I’ve deliberately glossed over some of the details, it creates a nice visual effect and greatly reduces the perceived load time of images. It is especially effective for long-lived, single-page applications.

  • September 9th, 2013 JavaScript Writing your own Karma adapter By Misha Reyzlin


    When we started to work on the new version of our mobile web app, we knew we wanted to run unit tests on a wide variety of clients: mobile devices, PhantomJS, and Chrome when running locally. Because we practice continuous integration, we also wanted Git hooks and proper results formatting.

    We chose Karma runner, a project from the AngularJS team that provides developers with a “productive testing environment”. One of the advantages that Karma runner offers over other similar projects is its ability to use any testing framework. At SoundCloud, we aim to have the same toolset across various JavaScript projects, and our unit test framework of choice is Tyrtle.

    Using Tyrtle

    You can write your own Karma adapter by using the Tyrtle example that follows. The idea is to tie your tests to the Karma API. The pieces of information that you need are the number of tests, test suites or modules, the results of each test (with possible assertion or execution errors, or both), and a hook to let Karma know that the runner ran all of the tests.

    You also need to provide a start function that configures the unit test framework, loads the test files, and starts the tests.

    The basic template for an adapter is as follows:

    (function (win) {
      /**
       * Returned function is used to kick off tests
       */
      function createStartFn(karma) {
        return function () {
          // configure the framework, load the test files, start the tests
        };
      }
      /**
       * Returned function is used for logging by Karma
       */
      function createDumpFn(karma, serialize) {
        // inside you could use a custom `serialize` function
        // to modify or attach messages or hook into logging
        return function () {
{ dump: [] });
        };
      }
      win.__karma__.start = createStartFn(win.__karma__);
      win.dump = createDumpFn(win.__karma__, function (value) {
        return value;
      });
    }(window));

    Next, create a renderer/reporter for the unit test framework that will pass the data to Karma. Tyrtle has a renderer that can render HTML, XML for CI, or print to any other type of output.

    To pass the data to Karma, implement the methods that follow:

    /**
     * Tyrtle renderer
     * @interface
     */
    function Renderer() {}
    Renderer.prototype.beforeRun  = function (tyrtle) {};
    Renderer.prototype.afterRun   = function (tyrtle) {};
    Renderer.prototype.afterTest  = function (test, module) {};

    The createStartFn function creates a renderer object, with a Karma runner instance available within the start-function’s scope.

    Create a parameter named karma:

    function TyrtleKarmaRenderer(karma) {
      this.karma = karma;
    }

    Tell Karma what the total number of tests is:

    /**
     * Invoked before all tests are run; reports complete number of tests
     * @param  {Object} tyrtle Instance of Tyrtle unit tests runner
     */
    TyrtleKarmaRenderer.prototype.beforeRun = function (tyrtle) {
        // count number of tests in each of the modules
        total: tyrtle.modules.reduce(function (memo, currentModule) {
          return memo + currentModule.tests.length;
        }, 0)
      });
    };

    After each test, pass the resulting data to Karma:

    /**
     * Invoked after each test, used to provide Karma with feedback
     * for each of the tests
     * @param  {Object} test current test object
     * @param  {Object} module instance of Tyrtle module
     *                  to which this test belongs
     */
    TyrtleKarmaRenderer.prototype.afterTest = function (test, module) {
        suite: [ + "#"] || [],
        success: test.status === Tyrtle.PASS,
        log: [test.statusMessage] || [],
        time: test.runTime
      });
    };

    Next, inform Karma that the tests have all finished running:

    /**
     * Invoked after all the tests are finished running,
     * with the unit tests runner as a first parameter.
     * `window.__coverage__` is provided by Karma.
     * This function notifies Karma that the unit tests runner is done.
     */
    TyrtleKarmaRenderer.prototype.afterRun = function (/* tyrtle */) {
        coverage: window.__coverage__
      });
    };

    You now have a renderer constructor. Next, turn your attention to the createStartFn function. It needs to configure and initialize the unit test framework, and it returns a function that collects the list of test files served by the Karma server and starts the actual runner.

    Karma serves the files that are required for testing from a path that it creates, and it timestamps the files to avoid caching issues in browsers. Karma makes each path available as a key in a hash named __karma__.files. This makes Karma a bit tricky to configure when using an AMD loader such as RequireJS. To understand how to use AMD with Karma, go to:

    Here is the final createStartFn function:

    /**
     * Creates instance of Tyrtle to run the tests.
     * Returned start function is invoked by Karma runner when Karma is
     * ready (connected with a browser and loaded all the required files).
     * When invoked, the start function will AMD require the list of test
     * files (saved by Karma in window.__karma__.files), set them
     * as test modules for Tyrtle, and then invoke the Tyrtle runner to
     * kick off the tests.
     * @param  {Object} karma Karma runner instance
     * @return {Function}     start function
     */
    function createStartFn(karma) {
      var runner = new Tyrtle({});
      Tyrtle.setRenderer(new TyrtleKarmaRenderer(karma));
      return function () {
        var testFiles = Object.keys(window.__karma__.files)
          .filter(function (file) {
            return (/-test\.js$/).test(file);
          })
          .map(function (testFile) {
            return testFile.replace('/base/public/', '').replace('.js', '');
          });
        require(testFiles, function () {
          // test files can return a single module, or an array of them.
          testFiles.forEach(function (testFile) {
            var testModule = require(testFile);
            if (!Array.isArray(testModule)) {
              testModule = [testModule];
            }
            testModule.forEach(function (aModule, index) {
              aModule.setAMDName(testFile, index);
              // hand each module to the runner
              runner.module(aModule);
            });
          });
          // kick off the tests
        });
      };
    }

    To find more examples of how this all fits together, see the scripts test-main.js (the RequireJS configuration to work with Karma) and karma.conf.js. Also, there are many adapter implementations such as Mocha, NodeUnit, and QUnit on the Karma GitHub page.

    Ursula Kallio contributed to the writing of this post.

  • June 14th, 2012 JavaScript Building The Next SoundCloud By Nick Fisher

    This article is also available in:

    The front-end team at SoundCloud has been building upon our experiences with the HTML5 widget to make the recently released Next SoundCloud beta as solid as possible. Part of any learning also includes sharing your experiences, so here we outline the front-end architecture of the new site.

    Building a single-page application

    One of the core features of Next SoundCloud is continuous playback, which allows users to start listening to a sound and continue exploring without ever breaking the experience. Since this really encourages lots of navigation around the site, we also wanted to make that as fast and smooth as possible. These two factors were enough in themselves for us to decide to build Next as a single-page JavaScript application. Data is drawn from our public API, but all rendering and navigation happens in the browser for near-instant navigation without the need to make round-trip requests to the server.

    As a basis for this style of application, we have used the massively popular Backbone.js. What attracted us to Backbone (apart from the fact that we’re already using it for our Mobile site and the Widget) is that it doesn’t prescribe too much about how it should be used. Backbone provides a solid basis for working with views, data models and collections, but leaves lots of questions unanswered, and this is where its flexibility and strength really lies.

    For rendering the views on the front end, we use the Handlebars templating system. We evaluated several other templating engines, but settled on Handlebars for a few reasons:

    • No logic is performed inside the templates, which enforces good separation of concerns.
    • It can be precompiled before deployment which results both in faster rendering and a smaller payload that needs to be sent to clients (the runtime library is only 3.3kb even before gzip).
    • It allows for custom helpers to be defined.

    Modular code

    One technique we used with the Widget which ended up being a great success was to write our code in modules and declare all dependencies explicitly.

    When we write code, we write in CommonJS-style modules which are converted to AMD modules when they’re executed in the browser. There are some reasons we decided to have this conversion step, possibly best explained by seeing what each style looks like:

    // CommonJS module ////////////////
    var View = require('lib/view'),
        Sound = require('models/sound');

    MyView = module.exports = View.extend({
        // ...
    });

    // Equivalent AMD module //////////
    define(['require', 'exports', 'module', 'lib/view', 'models/sound'],
      function (require, exports, module) {
        var View = require('lib/view'),
            Sound = require('models/sound');

        MyView = module.exports = View.extend({
            // ...
        });
      });
    • The extra define boilerplate is tedious to write
    • Duplication of module dependencies is also tedious and error-prone
    • Conversion from CommonJS to AMD is easily automated, so why not?

    During local development, we convert to AMD modules on the fly and use RequireJS to load them individually. This makes development quite frictionless, as we can just save and refresh to see the updates; however, it’s not so great for production, since this method creates hundreds of HTTP requests. Instead, the modules are concatenated into several packages, and we drop RequireJS for the super-lightweight module loader Almond.

    CSS and Templates as dependencies

    Since we’re already including all of our code by defining explicit dependencies, we thought it made sense to also include the CSS and template for a view in the same way. Doing this for templates was rather straightforward, since Handlebars compiles the templates into a JavaScript function anyway. For the CSS, it was a new paradigm:

    Views define which CSS files they need to display properly, just the same as you would define the JavaScript modules you need to execute. Only when the view is rendered do we insert its CSS into the page. Of course, there are some common global styles, but mostly, each view has its own small CSS file that defines the styles for that view alone.

    When writing the code, we write plain vanilla CSS (without the help of preprocessors such as SCSS or LESS), but since the styles are included by Require/Almond, they need to be converted into modules as well. This is done with a build step that wraps the styles into a function returning a <style> DOM element. Here’s an example of how it looks in essence:

    Input is plain CSS

    .myView {
      padding: 5px;
      color: #f0f;
    }
    .myView__foo {
      border: 1px solid #0f0;
    }

    Result is an AMD module

    define("views/myView.css", [...], function (...) {
      var style = module.exports = document.createElement('style');
      style.appendChild(document.createTextNode('.myView { padding: 5px; color ... }'));
    });

    Views as components

    A central concept in developing Next is that of treating views as independent and reusable components. Each view can include other ‘sub’ views, which can themselves include subviews and so on. The effect of this is that some views are merely composites of other views and cover an entire page, whereas others can be as small as a button, or even a label in some cases.

    Keeping these views independent is very important. Each view is responsible for its own setup, events, data, and clean up. Views are ‘banned’ from modifying the behaviour or appearance of their subviews, or even making assumptions about how or where this view itself is being included. By removing these external factors, it means that each view can be included in any context with absolute minimum fuss, and we can be sure that it will work as it is supposed to.

    As an example, the ‘play’ button on Next is a view. To include one anywhere on the site, all that we need to do is create an instance of the button, and tell it the id of the sound it should play. Everything else is handled internally by the button itself.
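    In sketch form, the embedding contract looks something like this (`PlayButton` and its option names are illustrative stand-ins, not the real view API):

    ```javascript
    // A self-contained stand-in for the real view: the caller supplies
    // only the sound id; the view handles everything else internally.
    function PlayButton(options) {
      this.soundId = options.soundId;
    }

    PlayButton.prototype.render = function () {
      // the real view would build a DOM element and bind click events here
      this.label = 'Play sound ' + this.soundId;
      return this;
    };

    var button = new PlayButton({ soundId: 49931 });
    console.log(button.render().label);
    ```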

    To actually create these subviews, most of the time they are created inside the template of the parent view. This is done by use of a custom Handlebars helper. Here is a snippet from a template which uses the view helper:

    <div class="listenNetwork__creator">{{view "views/user/user-badge"}}</div>

    As you can see, adding a subview is as simple as specifying the module name of the view and passing some minimal instantiation variables. What actually happens behind the scenes goes like this:

    When a view is rendered, the template must return a string. When the view helper is invoked, it pushes the attributes passed to it, plus a reference to the requested view class, into a temporary object with an id, and outputs a placeholder element (we use <view data-id="xxxx">). The id is just a unique, incrementing number. After a template is rendered, the output would be a string which might look something like:

    <div class="foo">
        <view data-id="123"></view>
    </div>

    Then we find the placeholders and replace those elements with the subview’s element which it automatically creates for itself. In essence, the code does this:

    parentView.$('view').each(function () {
      var id      = this.getAttribute('data-id'),
          attrs   = theTemporaryObject[id],
          SubView = attrs.ViewClass,
          subView = new SubView(attrs);
      subView.render(); // repeat the process again
      // swap the placeholder element for the view's own element
      this.parentNode.replaceChild(subView.el, this);
    });

    Sharing Models between Views

    So, we now have a system where there will be dozens of views on the screen at one time, many of which will be views of the same model. Take, for example, a “listen” page:

    There would be a view for the play button, the title of the sound, the waveform, the time since the sound was uploaded (this dynamically updates itself, which is why it is a view), and so on. Each of these views is of the same sound model, but we wouldn’t want each to duplicate the data. Instead, we need to find a way to share the model.

    Also remember that each of these views has to handle the case where there is no data yet. Almost all views are instantiated only with the id of their model, so it’s quite possible that the data for that model hasn’t been loaded yet.

    To solve this, we use a construct we call the instance store. This store is an object which is implicitly accessed and modified each time a constructor for a model is called. When a model is constructed for the first time, it injects itself into the store, using its id as a unique key. If the same model constructor is called with the same id, then the original instance is returned.

    var s1 = new Sound({id: 123}),
        s2 = new Sound({id: 123});
    s1 === s2; // true, these are the exact same object.

    This works because of a surprisingly little-known feature of JavaScript: if a constructor returns an object, then that object is the value assigned, rather than the newly created instance. Therefore, if we return a reference to the instance created earlier, we get the desired behaviour. Behind the scenes, the constructor is basically doing this:

    var store = {};
    function Sound(attributes) {
        var id =;
        // check if this model has already been created
        if (store[id]) {
            // if yes, return that
            return store[id];
        }
        // otherwise, store this instance
        store[id] = this;
    }

    This is not a particularly new pattern: it’s just the Factory Method pattern wrapped up in the constructor. It could have been written as Sound.create({id: 123}), but since JavaScript gives us this expressive ability, it makes sense to use it.
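    Putting the store and the constructor-return trick together, a minimal framework-free version behaves like this:

    ```javascript
    var store = {};

    function Sound(attributes) {
      var id =;
      if (store[id]) {
        // the constructor hands back the cached instance instead of `this`
        return store[id];
      }
      this.attributes = attributes;
      store[id] = this;
    }

    var s1 = new Sound({ id: 123 });
    var s2 = new Sound({ id: 123 });
    console.log(s1 === s2); // true
    ```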

    This feature means that it’s trivial for views to share the same instance of a model without knowing anything about the other views, simply by calling the constructor with an id. We can then use this shared instance as a localised ‘event bus’ to facilitate communication and synchronisation between the views. Usually this takes the form of listening to changes to the model’s data: if the views subscribe to the ‘change’ events that affect them, they are notified immediately upon change, and the page can be kept up to date with very little effort required from the developer.

    This is also how we solve the issue of there being no data on the model. On the first pass, several views might have a reference to a model that contains only an id and no other attributes. When the first view is rendered, it can detect that the model does not have enough information and ask it to fetch its data from the API. The model keeps track of this request, so that when the other views also ask it to fetch, we do nothing and avoid duplicate requests. When the data comes back from the server, the attributes of the model are updated, causing ‘change’ events which then notify all the views.
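    The request de-duplication can be sketched like this (`doRequest` is a stand-in for the actual API call; the counter exists only to show that concurrent fetches collapse into one request):

    ```javascript
    var requests = 0;

    // stand-in for the real API call
    function doRequest(id) {
      requests += 1;
      return Promise.resolve({ id: id, username: 'matas' });
    }

    function Model(id) { = id;
      this._pending = null;
    }

    // fetch() reuses the in-flight request, so several views asking for
    // the same model trigger only one API call
    Model.prototype.fetch = function () {
      if (!this._pending) {
        this._pending = doRequest(;
      }
      return this._pending;
    };

    var m = new Model(1433);
    Promise.all([m.fetch(), m.fetch(), m.fetch()]).then(function () {
      console.log(requests); // only one request was made
    });
    ```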

    Making full use of data

    A common feature of many APIs is that when one particular resource is requested, other related resources are included in the response. For example, on SoundCloud, when you request information about a sound, included in the response is a representation of the user who created that sound.

    /* */
    {
      "id": 49931,
      "title": "Hobnotropic",
      "user": {
        "id": 1433,
        "permalink": "matas",
        "username": "matas",
        "avatar_url": "http://i1.soundc..."
      }
    }

    Rather than let this extra data go to waste, each model is aware of which ‘sub-resources’ it can expect in its responses. These sub-resources are inserted into the instance store in case any views need to use the data. This means we can save a lot of extra trips to the API and display the views much faster.

    So, for our example above, the Sound model would know that its property “user” holds a representation of a User model. When that data is fetched, two models are created and populated on the client side:

    var sound = new Sound({ id: 49931 });
    sound
      .fetch()             // get the data
      .done(function () {  // and when it's done
        var user = new User({ id: sound.get('user').id });
        user.get('username'); // 'matas' -- we already have the model data
      });

    What’s important to remember is that because there’s only ever one instance of each model, even pre-existing instances are updated. Here’s the same example from above, but note when the User is created.

    var sound = new Sound({ id: 49931 }),
        user = new User({ id: 1433 });
    user.get('username'); // undefined -- we haven't fetched anything yet.
    sound
      .fetch()
      .done(function () {
        user.get('username'); // 'matas' -- the previous instance is updated
      });

    Letting go

    Holding on to every instance of every model forever isn’t feasible, especially for the Next SoundCloud. Because of the nature of the site, it’s quite possible that a user might go for several hours without ever performing a page load. During this time, the memory consumption of the application would just continue to grow and grow. Therefore, at some point we need to flush some models out of the instance store. To decide when it is safe to do this, the instance store increments a usage counter each time an instance is requested, and views ‘release’ a model when it is no longer needed, decrementing the count.

    Periodically, we check the store to see if there are any models with a count of zero, and they’re purged from the store, allowing the browser’s garbage collector to free up the memory. This usage count is encapsulated in the store object, but in essence it’s something like this:

    var store = {},
        counts = {};

    function Sound(attributes) {
      var id =;
      if (store[id]) {
        counts[id] += 1;
        return store[id];
      }
      store[id] = this;
      counts[id] = 1;
    }

    Sound.prototype.release = function () {
      counts[] -= 1;
    };

    The reason for performing the cleanup on a timer, rather than whenever a usage count hits zero is so that the model stays in the store when you switch views. If you navigate to another page, there will be a single moment between cleaning up the existing views and setting up the new ones when every single model’s count is zero. The new page might actually contain views of one or more of these models, so it’d be quite wasteful to remove them instantly.
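    In sketch form (the function names here are illustrative, not the store's real API), the reference counting and periodic purge look like this:

    ```javascript
    var store = {};
    var counts = {};

    function retain(id, instance) {
      if (!store[id]) {
        store[id] = instance;
        counts[id] = 0;
      }
      counts[id] += 1;
      return store[id];
    }

    function release(id) {
      counts[id] -= 1;
    }

    // run periodically, e.g. on a setInterval timer, so models with a
    // momentary zero count survive a page transition
    function purge() {
      Object.keys(counts).forEach(function (id) {
        if (counts[id] === 0) {
          delete store[id];
          delete counts[id];
        }
      });
    }

    retain('a', {});
    retain('b', {});
    release('b');
    purge();
    console.log('a' in store, 'b' in store); // true false
    ```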

    A long journey…

    This has been a brief introduction to some of the methods and concepts we’re using to create Next SoundCloud, but it’s just the beginning. There are plenty more features yet to build, and therefore plenty more challenges to tackle. If you want to join us along the way, remember that we’re always hiring!

  • November 21st, 2011 JavaScript Front-end JavaScript bug tracking By Yves

    Proper and effective error tracking is a common problem for front-end JavaScript code, more so than in back-end environments.

    We felt this pain as well and experimented with different solutions over the past months on the SoundCloud Mobile site.

Google Analytics
    The first approach we took was to track errors with Google Analytics. Their library lets you fire custom events, and whenever an Ajax error occurred, we logged it.

    The biggest benefit of this tool is that you can monitor the stability of the site and its evolution over longer periods, as you can easily go back a few weeks or months to see which events were triggered. Also, it is easy to implement: almost a one-liner!

    The drawback, at least for Google Analytics, is that the tool is not meant to track bugs. There is no way to attach custom data to these events to get more insight into why and how an error happened, and it doesn’t work in real time, which you obviously want when debugging.

    So we kept Analytics in place for a long-term view, but took a look at other options for real-time and in-depth tracking.

Airbrake
    In our pursuit of getting more insight, we decided to take a look at Airbrake because we were already using it to track back-end errors on our main site.

    Our mobile site runs on Node.js, so the first thing we did was integrate an existing plugin to handle error tracking on the back-end as well.

    Looking a little further, we found a front-end notifier that catches errors firing on window.onerror, but there was no way to report any custom errors.

    We decided to take a day to hack this on our own since their API is public and easy to implement.

    The benefits of Airbrake were instant. We could see what triggered which error, how, why, in which context, and in which browser, all in real time!

    It also counts errors, which can help you prioritize and include fixes in your roadmap.

    However, the lack of filtering, grouping and custom sorting made it difficult to work with. There was also no sense of time or progress, as everything just gets dumped into a single list ordered by time. We needed something a little better than that.

BugSense
    That’s when our Android team showed us their BugSense implementation. BugSense seemed to address all of the issues we had with Airbrake: grouping is more effective, searching and filtering are possible, and charts of errors are drawn as well.

    There is one more benefit over Airbrake: JSON. No need to convert objects to XML strings anymore!

    If you are interested in our BugSense notifier, you can find the source on GitHub.


    There is still a lot of work needed to make front-end JS debugging as easy as it is in regular back-end environments. For example, stack traces today aren’t that useful because of anonymous functions and minified code, but hopefully browser vendors will tackle these issues soon. Maybe Source Maps could be the first milestone in this quest.

    At SoundCloud, we will continue to use a combination of these tools because of the different strengths outlined above, but there are also other tools we haven’t tried yet, like getexceptional or errorception. If you have tried these, or if you have any suggestions on this subject, we’d like to get your feedback in the comments below.

    Happy debugging!

  • August 27th, 2010 API JavaScript Of CORS We Do By Thor

    If you’re a JavaScript head, we’ve got something for you: SoundCloud now supports Cross-Origin Resource Sharing (CORS) using XMLHttpRequest. Or, to put it another way: no more implausible JSON-P hacks.

    Some background on CORS can be found here and here. Our implementation is super simple: we let you do GET requests for our public resources. Full documentation of the feature is on our wiki, but here’s a bit of code to get you started:

    var invocation = new XMLHttpRequest();
    // Internet Explorer uses a proprietary object called XDomainRequest
    var url = '';

    function handler() {
        if (invocation.readyState === 4) {
            // do something with invocation.responseText
        }
    }

    function callOtherDomain() {
        if (invocation) {
  'GET', url, true);
            invocation.onreadystatechange = handler;
            invocation.send();
        }
    }

    As we’re just setting headers, the implementation was done as an addition to our Rack stack, which means that it’s easy for us to pull out or move around as needed. Once the appropriate headers are added, these newfangled modern browsers handle the rest.