Cloudifying images on svenska.yle.fi

Yle’s internal Image Management System (IMS) was recently renewed. It was a big leap forward for svenska.yle.fi to move all of its images to the cloud, not only for storage but for all the transformations as well.

Background
IMS is a custom, PHP-based solution for storing images in a central place. It supports uploading and cropping images as well as managing image information such as tags, alt text and copyright. Images can be searched by tag or upload date. The system is multilingual, currently supporting English, Finnish and Swedish.

IMS was born about five years ago, in December 2008, when its first version was launched. It was a quite simple-looking tool for uploading and searching images. The workflow was to upload an image, enter its information such as tags and copyright, select its crop area(s) and save it. An editor would then select the image while writing an article in SYND, FYND or any of the older, now migrated sites. The image id is saved in the article, and the image is displayed via IMS. This way the same image can be reused in multiple articles.

Different image sizes are needed for each image depending on where it is displayed on the site. IMS had a custom-made, JavaScript-based cropping tool for selecting the crop area of an image. The alternatives were to use the same crop area for all image sizes, or to crop each size separately. The result was that we stored 10 image files per uploaded image: the original plus the cropped version in nine different sizes ranging from 640x360px down to 60x60px. All of these were in 16:9 ratio, with the exception of the last one, which was 1:1.

Cloudification
Along with Yle’s new Images API, the new version of IMS serves images from a cloud service. All transformations are done on the fly by specifying parameters in the image URL. Therefore, no actual image crops are performed on our servers anymore; we only save the crop coordinates (as a string in our database) and relay them to the cloud.
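To give an idea of what this looks like in practice, here is a minimal sketch in PHP of building such a URL. The transformation parameters (c_crop, x_, y_, w_, h_, c_scale) follow the cloud service’s Cloudinary-style URL syntax, while the helper function, cloud name and example values are made up for illustration.

<?php
// Hypothetical helper: turn the crop coordinates we store as a string
// ("x,y,width,height") plus a target width into a cloud delivery URL.
// The chained transformation syntax is Cloudinary-style; the cloud name
// and public id are placeholders.
function ims_cloud_url($public_id, $crop_coordinates, $target_width) {
  list($x, $y, $w, $h) = explode(',', $crop_coordinates);

  // Step 1: crop the original image according to the saved coordinates.
  $crop = "c_crop,x_{$x},y_{$y},w_{$w},h_{$h}";

  // Step 2: scale the cropped area to the width we want to serve.
  $scale = "c_scale,w_{$target_width}";

  return "https://res.cloudinary.com/demo/image/upload/{$crop}/{$scale}/{$public_id}.jpg";
}

// Example: a 300px wide rendition of the stored 16:9 crop of an image.
echo ims_cloud_url('sample-image-id', '0,120,1920,1080', 300);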

IMS also supports choosing crop areas for different image ratios now, instead of for different sizes. Available ratios to choose from are 16:9, 2:3 and 1:1.

When uploading an image, the image is first uploaded locally to our server. It is given a public id (used as a resource identifier by the cloud service), which, along with other information related to the image, is saved to our database. After that we tell Images API where the image is located and what public id it has. The image is fetched from its location and pushed to the cloud service. Now we can start using our image directly from the cloud, and that is exactly what we do next.
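As a rough sketch of that hand-over, the steps could look like the following in plain PHP. Everything named here (the temporary URL, the database table, the Images API endpoint and its payload fields) is invented for the example, since the real Images API is internal to Yle.

<?php
// Rough sketch of the hand-over to Images API. Every name below (temporary
// URL, table, endpoint, payload fields) is hypothetical; the real API is
// internal to Yle. $pdo is assumed to be an existing database connection.
$public_id = uniqid('ims_', true);                          // cloud resource identifier
$local_url = 'https://ims.example.yle.fi/tmp/' . $filename; // file uploaded locally first

// 1. Save the public id along with the rest of the image metadata.
$stmt = $pdo->prepare('INSERT INTO images (public_id, filename) VALUES (?, ?)');
$stmt->execute(array($public_id, $filename));

// 2. Tell Images API where the file is and which public id it should get.
//    Images API fetches the file from that location and pushes it to the cloud.
$ch = curl_init('https://images-api.example.yle.fi/v1/images');
curl_setopt_array($ch, array(
  CURLOPT_POST           => true,
  CURLOPT_POSTFIELDS     => json_encode(array('url' => $local_url, 'public_id' => $public_id)),
  CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
  CURLOPT_RETURNTRANSFER => true,
));
$response = curl_exec($ch);
curl_close($ch);
// From here on the image is served directly from the cloud.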

Once an image has been uploaded, the user is redirected to the image editor view. Even here, the image shown is already served from the cloud, scaled down to an appropriate size just by adding a scale parameter to the image URL. The user may now define how the image should be cropped in each available ratio, enter tags, alt text and so on. For increased SEO, we actually save the given tags and copyright text into the image file itself, in its IPTC data. This means, however, that each time these values change, the image has to be sent to the cloud again, replacing the old one.
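Embedding the tags and copyright into the file’s IPTC block can be done with PHP’s built-in iptcembed(). The sketch below uses the tag-building helper from the PHP manual; the field numbers (2#025 for keywords, 2#116 for the copyright notice) come from the IPTC standard, while the file path and values are just examples.

<?php
// Build a binary IPTC block (tag-building helper from the PHP manual's
// iptcembed() documentation) and write it into the JPEG file.
function iptc_make_tag($rec, $tag, $value) {
  $length = strlen($value);
  $retval = chr(0x1C) . chr($rec) . chr($tag);
  if ($length < 0x8000) {
    $retval .= chr($length >> 8) . chr($length & 0xFF);
  } else {
    $retval .= chr(0x80) . chr(0x04)
             . chr(($length >> 24) & 0xFF) . chr(($length >> 16) & 0xFF)
             . chr(($length >> 8) & 0xFF)  . chr($length & 0xFF);
  }
  return $retval . $value;
}

$path = '/tmp/example-image.jpg';             // illustrative path
$iptc = '';
foreach (array('archipelago', 'summer') as $keyword) {
  $iptc .= iptc_make_tag(2, 25, $keyword);    // 2#025 = Keywords
}
$iptc .= iptc_make_tag(2, 116, '© Yle');      // 2#116 = Copyright Notice

// iptcembed() returns the JPEG data with the IPTC block embedded in it.
file_put_contents($path, iptcembed($iptc, $path));
// After this the file is re-uploaded to the cloud, replacing the old version.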

Drupal integration
We have a Drupal module that integrates with IMS in order to fetch images from it. In the Drupal frontend we initially always render a 300px-wide image in order to show users some image almost instantly, even though it may be very blurry if it has been scaled up. Once the page has loaded, a JavaScript routine goes through all the images and swaps them for bigger versions.

In the old days, when we had those nine different sizes available, it was hardcoded in the script which size should be used where on the site.

With the cloud service in use, we are able to utilize its on-the-fly image manipulations. Our script now looks up the size of the image’s containing element (e.g. the parent of the img) and renders an image in exactly that size, simply by changing the size parameters in the image URL. This enables us to control how large the images we serve are just by changing the element size in the stylesheets.

The difference on tablet/mobile when we can select the best possible size for any resolution.

Challenges
One of the most challenging things we encountered was the fact that many images are 640x360px in size. That is the original image size! So how do we show images that small in articles where we want an 880px-wide image? We add an upscale effect.

Using the cloud service’s image manipulations, we take the original image, scale it up to the desired size and blur it as much as possible. Let’s call this our canvas. Then we place the original image, in its original size, on the canvas we just made. The result looks like our image has blurred borders. The same kind of technique is used on TV when showing old 4:3 clips in 16:9 format.
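Expressed as Cloudinary-style chained transformations, the whole effect fits in the delivery URL: one component scales the original up and blurs it into a canvas, and the next overlays the untouched original on top. The sketch below is illustrative; the public id, cloud name, target size and blur strength are made up.

<?php
// Illustrative "blurred canvas" upscale built from Cloudinary-style chained
// transformations. Public id, cloud name, size and blur strength are examples.
$public_id = 'sample-image-id';
$width     = 880;   // width of the article image slot
$height    = 495;   // 16:9 height for an 880px wide slot

// 1. Canvas: scale the original up to the target size and blur it heavily.
$canvas  = "w_{$width},h_{$height},c_scale,e_blur:2000";

// 2. Overlay: place the same image, at its original size, centered on the canvas.
$overlay = "l_{$public_id}";

echo "https://res.cloudinary.com/demo/image/upload/{$canvas}/{$overlay}/{$public_id}.jpg";
// The original ends up framed by a blurred, upscaled version of itself,
// which reads as blurred borders around the picture.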

We ran into a few bugs in the open-source libraries we used. We decided to ditch the custom crop tool and use the open-source Jcrop library instead. There was an issue when using a fixed aspect ratio together with a minimum or maximum allowed height of the crop area. We fixed the bug in our GitHub fork and created a pull request to get the fix contributed.

Also, when using cloudinary_php, the PHP library for the cloud service, we noticed a flaw in its logic. When specifying an image to be cropped according to specific coordinates, zero values were not allowed. This prevented any crops from being made from, for example, the top left corner of an image (where both X and Y are 0). The bug was fixed in our fork and merged into the library via our pull request.

Migration
Another challenge was that we had over 160 000 images with a total file size of somewhere around 400GB. For all of these, we needed to a) generate a public id, b) upload the image to the cloud and c) save the image version number, returned by the cloud in response to the upload, in our database.

Of course we had to do this programmatically. With a quite simple script, we uploaded a sample batch of images. The script read a batch of rows from the database and looped through them, processing one image at a time. The idea was good, but according to our calculations the migration would have taken about 29 days to finish.

We then thought of running multiple instances of the script simultaneously to speed things up. Again, the idea was good, but we would have run into conflicts when all the scripts tried to read and write against the same database, let alone the same table.

Our final solution was to utilize a message queue for the migration. We chose RabbitMQ as our queue and used the Pekkis Queue library as an abstraction layer between the queue and our scripts.

This way we could enqueue all the images to be processed, run multiple instances of our script simultaneously and be sure that each image was processed only once. All in all, the migration took about 20 hours.
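As a rough illustration of the producer/consumer split (not the actual migration code), here is a minimal sketch talking to RabbitMQ directly with php-amqplib rather than through the Pekkis Queue abstraction we used. The queue name, payload and migrate_image() helper are hypothetical.

<?php
// Minimal producer/consumer sketch using php-amqplib directly (the actual
// migration went through the Pekkis Queue library). Queue name, payload and
// migrate_image() are hypothetical; $pdo is an assumed database connection.
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();
$channel->queue_declare('ims_migration', false, true, false, false);

// Producer: enqueue one message per image row.
foreach ($pdo->query('SELECT id FROM images') as $row) {
  $msg = new AMQPMessage(json_encode(array('image_id' => $row['id'])),
    array('delivery_mode' => 2)); // persistent message
  $channel->basic_publish($msg, '', 'ims_migration');
}

// Consumer: run several of these in parallel. With a prefetch count of 1,
// each worker takes one message at a time, so a given image is handled by
// only one worker.
$channel->basic_qos(null, 1, null);
$channel->basic_consume('ims_migration', '', false, false, false, false, function ($msg) {
  $payload = json_decode($msg->body, true);
  migrate_image($payload['image_id']); // generate public id, upload, store version number
  $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
});

while (count($channel->callbacks)) {
  $channel->wait();
}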

Written by Rasmus Werling
Rasmus “Rade” Werling has worked with Drupal development for 5 years. His specialities are backend coding and coming up with creative solutions to problems. He has contributed Drupal modules of his own and loves to take on challenges.

Replacing Drupal search with SOLR

There had been a need to replace Drupal’s core search with Apache SOLR at Svenska YLE for quite some time. Before I could begin the implementation, we needed to decide which Drupal modules we would use to handle the job. There were really only two options: the Apache SOLR and Search API modules. Search API was already familiar to us and had better Views support for our purposes, making it the obvious choice from the very beginning. At that point, we had not even done any actual comparison between Search API and Apache SOLR.

We already had an Apache SOLR test environment on YLE’s internal network, so we only needed to decide how to provide the Apache SOLR service in the developers’ local environments. We could either use a local virtual SOLR environment (e.g. in VirtualBox) or an external service that could be accessed from anywhere. Using a SOLR service within YLE’s internal network was out of the question, because the development environment needs to be functional outside YLE’s network.

We investigated some of the external SOLR services available, but finally chose to use local virtual SOLR environments. The main problem with this was how to ensure that all developers would have exactly the same development environment, and that the development environment would be similar to the production environment. After some trial and error, a Vagrant box gave us the solution to this problem. I will not go any further into the subject of Vagrant at this point, except to say that Vagrant is the perfect tool for managing environments.

Once the modules and environments were selected, the actual implementation work could begin. We were using SOLR 3.x in both the production and test environments, so I needed to get a similar setup running locally. I found a ready-made vagrant-solr-box on GitHub, so I decided to try that first. The environment worked just fine, so I continued the implementation using it.

I installed the Search API and Search API SOLR modules, along with the Search API SOLR Overrides module for overriding SOLR server settings in different environments. Configuring Search API in Drupal was already a familiar procedure to me, and everything proceeded very smoothly. I began by configuring the Search API SOLR server and index. I replaced the content listing pages with the help of the Search API Views module, and everything seemed to work nicely in my local environment. We were now ready to move everything to the test environment, where a “real” Apache SOLR environment was waiting for us. All we needed was a new SOLR core for our site.
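For each environment, the override boils down to a few lines in settings.php. The sketch below shows the general idea; the $conf keys follow the pattern of the Search API Override module and are an assumption here, so the exact names may differ for the module and version actually in use.

<?php
// settings.php sketch: per-environment SOLR connection override.
// ASSUMPTION: the $conf keys below follow the Search API Override pattern;
// the exact keys depend on the override module and version actually in use.
$conf['search_api_override_mode'] = 'load';
$conf['search_api_override_servers'] = array(
  'solr_server' => array(                  // machine name of the Search API server
    'name' => 'SOLR (local Vagrant box)',
    'options' => array(
      'host' => '127.0.0.1',
      'port' => '8983',
      'path' => '/solr/yle_core',          // illustrative core name
    ),
  ),
);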

As I mentioned, everything had proceeded reasonably well so far, but in the test environment we started to run into problems. First, Drupal wasn’t able to connect to the Apache SOLR server. By adjusting the proxy settings we were able to resolve this issue, but Search API still just wasn’t working with the multicore Apache SOLR in the test environment. Indexing was successful on our local virtual environments, but these had a single-core SOLR server. The configuration that had worked just fine in my local environment didn’t work at all in the test environment, even though both were using the same version of Apache SOLR.

To solve the problem, we started by installing a vanilla Drupal on the test environment with the same modules in use as on the actual site. By doing this, we were able to rule out any problems that might be caused by our own installation profile and features. Search API was not indexing content on this new test site either, so we decided to try upgrading SOLR. We upgraded SOLR from version 3.6 to 4.4, and at the same time we updated schema.xml to support the latest Search API and Apache SOLR modules. This resolved the problem: the test site was able to index content into SOLR, so we configured the actual site, and indexing started working there as well.

We were very relieved when this adventure was finally over. A task that initially had seemed easy didn’t turn out to be quite so easy after all, as these things usually go, but there is no greater joy than when everything works out in the end.

With the SOLR index we have been able to replace most of the taxonomy listing pages, and this has meant a reduction in processor load on the database server – especially in views that have depth enabled. The next thing to look into is removing the standard Drupal search index to get a smaller database.

Written by Ari Ruuska
Ari has worked with Drupal development for about 7 years, most of that time as a consultant at YLE, working as a Drupal developer and architect. He has also managed Drupal projects and developer teams.