Cloudifying images on svenska.yle.fi

Yle’s internal Image Management System (IMS) was recently renewed. It was a big leap forward for the site svenska.yle.fi to move all its images to the cloud, not only for storage but for all the transformations as well.

Background
IMS is a custom, PHP-based solution for storing images in a central place. It supports uploading and cropping images as well as managing image information such as tags, alt text and copyright. Images can be searched by tag or upload date. The system is multilingual, currently supporting English, Finnish and Swedish.

IMS was born about five years ago, in December 2008, when its first version was launched. It was a quite simple-looking tool for uploading and searching images. The workflow was to upload an image, enter its information such as tags and copyright, select its crop area(s) and save it. An editor would then select the image while writing an article in SYND, FYND or any of the older, now migrated sites. The image id is saved in the article, and the image is displayed via IMS. This way the same image may be reused in multiple articles.

Different image sizes are needed for each image depending on where on the site it is displayed. IMS had a custom-made, JavaScript-based cropping tool for selecting the crop area of the image. The alternatives were to use the same crop area for all image sizes, or to crop each size separately. The result was that we stored 10 image files per uploaded image: the original plus the cropped version in nine different sizes ranging from 640x360px down to 60x60px. All of these were in 16:9 ratio, with the exception of the last one, which was 1:1.

Cloudification
Along with Yle’s new Images API, the new version of IMS serves images from a cloud service. All transformations are done on the fly by specifying parameters in the image URL. Therefore, no actual image crops are performed on our servers anymore; we only save the crop coordinates (as a string in our database) and relay them to the cloud.
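As a rough illustration, here is how such a URL can be built with the cloudinary_php helper mentioned later in this post. The public id and coordinate values are made up; in IMS the coordinates come from the crop string stored in our database:

    require 'vendor/autoload.php';   // cloudinary/cloudinary_php
    Cloudinary::config(array('cloud_name' => 'your-cloud'));
    echo cloudinary_url('abc123', array(
        'crop'  => 'crop',
        'x'     => 120, 'y'      => 40,
        'width' => 640, 'height' => 360,
    ));
    // yields something like:
    // http://res.cloudinary.com/your-cloud/image/upload/c_crop,h_360,w_640,x_120,y_40/abc123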

IMS also supports choosing crop areas for different image ratios now, instead of for different sizes. Available ratios to choose from are 16:9, 2:3 and 1:1.

When an image is uploaded, it is first stored locally on our server. It is given a public id (used as a resource identifier by the cloud service), which, along with other information related to the image, is saved to our database. After that we tell Images API where the image is located and what public id it has. The image is fetched from its location and pushed to the cloud service. Now we can start using our image directly from the cloud, and that is exactly what we do next.
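A minimal sketch of that flow, with hypothetical endpoint, helper and variable names (the real Images API is internal to Yle):

    $publicId = uniqid('ims_');                          // resource identifier for the cloud service
    $localUrl = 'https://ims.example.invalid/uploads/' . $fileName; // the locally stored upload
    saveImageRow($db, $publicId, $tags, $copyright);     // hypothetical helper: persist metadata + public id

    // Tell Images API where the image is located and what public id it has;
    // the API then fetches the image and pushes it to the cloud service.
    $ch = curl_init('https://images-api.example.invalid/images');
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS     => http_build_query(array(
            'source_url' => $localUrl,
            'public_id'  => $publicId,
        )),
    ));
    $response = curl_exec($ch);
    curl_close($ch);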

Once an image has been uploaded, the user is redirected to the image editor view. Already here, the image shown is served from the cloud, scaled down to an appropriate size simply by adding a scale parameter to the image URL. The user may now define how the image should be cropped in each available ratio, enter some tags, alt text and so on. For improved SEO, we actually save the given tags and copyright text into the image file itself, in its IPTC data. This means, however, that each time these values change, the image has to be sent to the cloud again, replacing the old one.
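Writing the IPTC block can be done with plain PHP. A minimal sketch, using the iptc_make_tag() helper from the PHP manual; the field values are illustrative and the exact field mapping in IMS may differ:

    // Build a binary IPTC tag: record $rec, dataset $data, value $value.
    function iptc_make_tag($rec, $data, $value) {
        $length = strlen($value);
        $retval = chr(0x1C) . chr($rec) . chr($data);
        if ($length < 0x8000) {
            $retval .= chr($length >> 8) . chr($length & 0xFF);
        } else {
            $retval .= chr(0x80) . chr(0x04)
                     . chr(($length >> 24) & 0xFF) . chr(($length >> 16) & 0xFF)
                     . chr(($length >> 8) & 0xFF) . chr($length & 0xFF);
        }
        return $retval . $value;
    }

    // 2:025 = keywords, 2:116 = copyright notice.
    $iptc = iptc_make_tag(2, 25, 'archipelago') . iptc_make_tag(2, 116, '© Yle');
    $jpeg = iptcembed($iptc, $path);   // returns the JPEG data with the IPTC block embedded
    file_put_contents($path, $jpeg);   // after this, the file is re-sent to the cloud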

Drupal integration
We have a Drupal module that integrates with IMS in order to fetch images from it. In the Drupal frontend we initially always render a 300px wide image, so that users see some image almost instantly, even though it may be very blurry where it has been scaled up. Once the page has loaded, a JavaScript routine goes through all the images and swaps each one for a bigger version.

In the old days, when we had those nine fixed sizes available, the script hardcoded which size should be used where on the site.

With the cloud service in use, we can utilize its on-the-fly image manipulations. Our script now looks up the size of the image’s containing element (e.g. the parent of the img) and renders an image in exactly that size, simply by changing the size parameters in the image URL. This enables us to control how large the images we serve are just by changing element sizes in the stylesheets.
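For example, if the stylesheet gives the containing element a width of 880px, the script only has to rewrite the size parameter in the URL (Cloudinary-style parameters; the public id is made up):

    .../image/upload/w_300,c_scale/abc123.jpg  ->  .../image/upload/w_880,c_scale/abc123.jpg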

The difference for tablet/mobile when we can select the best possible size for any resolution.

Challenges
One of the most challenging things we encountered was the fact that many images are only 640x360px in size. That is the original image size! So how do we show images that small in articles where we want an 880px wide image? We add an upscale effect.

Using the cloud service’s image manipulations, we take the original image, scale it up to the desired size and blur it as much as possible. Let’s call this our canvas. Then we place the original image, in its original size, on the canvas we just made. The result looks as if our image has blurred borders. The same kind of technique is used on TV when old 4:3 clips are shown in 16:9 format.
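With Cloudinary-style chained transformations, the whole effect can be expressed in a single URL: the first step scales up and blurs the base image, the second overlays the same image at its original size (centered by default). The public id and exact values here are illustrative:

    .../image/upload/w_880,h_495,c_scale,e_blur:2000/l_abc123,w_640,h_360/abc123.jpg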

We also ran into a few bugs in the open-source libraries we used. We decided to ditch the custom crop tool and use the open-source Jcrop library instead. There was an issue when using a fixed aspect ratio together with a minimum or maximum allowed height of the crop area. We fixed the bug in our GitHub fork and created a pull request to get the fix contributed upstream.

When using cloudinary_php, the PHP library for the cloud service, we also noticed a flaw in its logic. When specifying an image to be cropped according to specific coordinates, zero values were not allowed. This prevented any crops from being made e.g. from the top left corner of an image (where both X and Y are 0). The bug was fixed in our fork and merged into the library via our pull request.
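In cloudinary_php terms, a top-left crop is requested roughly like this (illustrative public id); before the fix, the zero x and y values were silently dropped from the generated URL:

    echo cloudinary_url('abc123', array(
        'crop'  => 'crop',
        'x'     => 0,   'y'      => 0,
        'width' => 640, 'height' => 360,
    ));
    // expected: .../image/upload/c_crop,h_360,w_640,x_0,y_0/abc123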

Migration
Another challenge was that we had over 160 000 images, with a total file size of around 400GB. For each of these we needed to a) generate a public id, b) upload the image to the cloud and c) save the image version number, returned by the cloud in response to the upload, in our database.

Of course we had to do this programmatically. With a quite simple script, we uploaded a sample batch of images. The script read a fixed number of rows from the database and looped through them, processing one image at a time. The idea was good, but according to our calculations the migration would have taken about 29 days to finish (roughly one image every 15 seconds for 160 000 images).

We then thought of running multiple instances of the script simultaneously to speed things up. Again, the idea was good, but we would have run into conflicts when all the scripts tried to read from and write to the same database, let alone the same table in the database.

Our final solution was to utilize a message queue for the migration. We chose RabbitMQ as our queue and used the Pekkis Queue library as an abstraction layer between the queue and our scripts.

This way we could enqueue all the images to be processed, run multiple instances of our script simultaneously and still be sure that each image was processed only once. All in all, the migration took about 20 hours.
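A minimal sketch of the pattern, talking to RabbitMQ directly via a recent php-amqplib rather than the Pekkis Queue abstraction we actually used; the DB helpers are hypothetical:

    use PhpAmqpLib\Connection\AMQPStreamConnection;
    use PhpAmqpLib\Message\AMQPMessage;

    require 'vendor/autoload.php';   // php-amqplib/php-amqplib

    $conn    = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
    $channel = $conn->channel();
    $channel->queue_declare('image_migration', false, true, false, false);

    // Producer: enqueue every image id exactly once.
    foreach (fetchAllImageIds($db) as $imageId) {          // hypothetical DB helper
        $msg = new AMQPMessage((string) $imageId, array('delivery_mode' => 2));
        $channel->basic_publish($msg, '', 'image_migration');
    }

    // Worker (run in as many parallel processes as needed): each message is
    // delivered to exactly one worker, so every image is processed only once.
    $channel->basic_qos(null, 1, null);                    // one message at a time per worker
    $channel->basic_consume('image_migration', '', false, false, false, false,
        function (AMQPMessage $msg) use ($db) {
            $imageId = (int) $msg->body;
            $version = migrateImage($db, $imageId);        // hypothetical: public id + upload
            saveVersionNumber($db, $imageId, $version);    // hypothetical: store the version
            $msg->ack();
        }
    );
    while ($channel->is_consuming()) {
        $channel->wait();
    }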

Written by Rasmus Werling
Rasmus “Rade” Werling has worked with Drupal development for 5 years. His specialities are backend coding and coming up with creative solutions to problems. He has contributed Drupal modules of his own and loves to take on challenges.