Changing a Drupal pathauto alias pattern with working redirects

So we recently found out we’ve been a bit boneheaded with the pathauto alias for our Freebase taxonomy pages. The pattern was, where 1234 was the term ID (tid) in Drupal.

This is a stupid alias pattern, since it makes our URLs obfuscated – impossible to determine from the outside without access to our Drupal database. It also feels like bad semantic form, because the URL contains something that’s meaningless to most people. So we wanted to change the pathauto alias to use the unique Freebase ID instead.

Here it is important to remember that the best-looking URL naturally would be just, but because of disambiguation (dealing with the case where a word can have many different meanings, such as "mercury", which can be an element, a Roman god, a planet and many other things) we want to guarantee that a URL is unique.

If we look up the word politics on Freebase, we find that its unique Freebase ID is /m/05qt0, and so we would like our URL to have the form

Our own Freebase module (which may be released at a later time) has added a field called "field_freebase_freebaseid" to a taxonomy vocabulary called "freebase". This means we have access to the token [term:field-freebase-freebaseid], which makes the whole pattern for Freebase taxonomy term listings the following:


The problem

The problem is that when we change the URL alias pattern, we want to leave the old alias intact and redirect from it to the new one. This functionality is built into the pathauto module: you can open a taxonomy term for editing, save it, and the new alias will be generated and the old one turned into a redirect.

However, we have 6 000 Freebase terms, and it would take a day to open them all up and save them to get the new alias with a redirect. It seems fortunate, then, that the pathauto module has a bulk update feature. Bulk update generates aliases for all entities in a specific content type. Unfortunately, bulk update only works on entities (in our case taxonomy terms) that don’t yet have a pathauto alias. What you have to do is delete all current aliases and then start the bulk update, which will generate new aliases using the new pattern. But if we start by deleting all current aliases, no redirects can be created! Here are some articles and threads discussing this very issue. Apparently it’s been a problem for around four years:

Basically, if you’ve created thousands of pathauto aliases that have been indexed by Google and need to exist as redirects to the new alias, you’re out of luck! This seems like an incomprehensible oversight and part of me thinks I must’ve missed something, because this isn’t acceptable.

The solution

Searching the web has given us several ideas about how to deal with this issue, but most require some kind of manual hacking of the database, which doesn’t really sound like something we want to do.

Instead, we ended up writing a simple Drush script that just loads all terms in a taxonomy vocabulary ("freebase" in our example, but the script could easily be modified to take a command line parameter). Writing the script took about a third of the time it took to write this blog text, so hopefully at least two other Drupal users will find it beneficial.

I am assuming you are familiar with Drush scripts, but to briefly explain: assuming your module is named "freebase", you can just create a file called "" in the same folder, and when you activate your module, the file will be autoloaded as an available Drush script.

The code

The definition of the Drush-command:

function ydd_api_drush_command() {
  $items = array();
  $items['yle-resave-freebase-terms'] = array(
    'description' => "Loads all Freebase terms and saves them to force Drupal to create new pathauto aliases.",
    'callback' => 'drush_fix_freebase',
    'aliases' => array('yrft'),
  );
  return $items;
}

The accompanying function that does the actual work:

function drush_fix_freebase() {
  // Find freebase vocabulary id.
  $fbvoc = taxonomy_vocabulary_machine_name_load('freebase');
  if (!empty($fbvoc)) {
    $vid = $fbvoc->vid;
    // Get a list of all terms in that vocabulary.
    $tree = taxonomy_get_tree($vid);
    $counter = 0;
    // Loop through all term ids, load and save.
    foreach ($tree as $t) {
      drush_print('Fixing '.$t->tid.' ['.++$counter.' / '.count($tree).']');
      // Load and save term to refresh its pathauto alias.
      $term = taxonomy_term_load($t->tid);
      taxonomy_term_save($term);
    }
  }
}

Finally, you run the script, which will tell you how many terms have been processed out of how many. The command is:

drush yrft

After running this, it’s easy to verify that all terms have indeed received new aliases and a redirect from the old alias.

Cloudifying images on

Yle’s internal Image Management System (IMS) was recently renewed. It was a big leap forward for the site to move all its images to the cloud, not only for the storage but for all the transformations as well.

IMS is a custom, PHP based solution for storing images in a central place. It supports uploading and cropping images as well as managing image information such as tags, alt text and copyright. Images may be searched by tags or upload date. The system is multilingual, currently supporting English, Finnish and Swedish.

IMS was born about 5 years ago in December 2008, when the first version of it was launched. It was a quite simple looking tool for uploading and searching images. The workflow was to upload an image, enter its information such as tags and copyright, select its crop area(s) and save it. Then an editor would select the image while writing an article in SYND, FYND or any of the older now migrated sites. The image id is saved in the article, and the image is displayed via IMS. This way the same image may be reused in multiple articles.

Different image sizes are needed for each image depending on where the images are displayed on the site. IMS had a custom made, JavaScript based cropping tool for selecting the crop area of the image. The alternatives were to use the same crop area for all different image sizes, or to crop each size separately. The result was that we had 10 image files stored per uploaded image: the original plus the cropped version in nine different sizes ranging between 640x360px and 60x60px. All of these were in 16:9 ratio, with the exception of the last one, which was 1:1.

Along with Yle’s new Images API, the new version of IMS serves images from a cloud service. All transformations are done on the fly by specifying parameters in the image URL. Therefore, no actual image crops are performed on our servers anymore; we only save the crop coordinates (as a string in our database) and relay them to the cloud.
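As a rough sketch of how stored crop coordinates can be relayed as on-the-fly transformations, the snippet below builds a crop URL in the style of Cloudinary’s parameter syntax. The base URL, the parameter names and the "x,y,w,h" storage format are assumptions for illustration, not our actual configuration.

```javascript
// Hypothetical sketch: turn crop coordinates stored as a string in the
// database into an on-the-fly transformation URL (Cloudinary-style syntax).
function cropUrl(publicId, coords, base) {
  // coords assumed to be stored as "x,y,width,height"
  const [x, y, w, h] = coords.split(',').map(Number);
  const transform = `c_crop,x_${x},y_${y},w_${w},h_${h}`;
  return `${base}/${transform}/${publicId}.jpg`;
}

console.log(cropUrl('sample-image', '10,20,640,360', 'https://images.example.com'));
// https://images.example.com/c_crop,x_10,y_20,w_640,h_360/sample-image.jpg
```

Because the crop lives in the URL rather than in a stored file, changing a crop is just a database update; no image is re-rendered on our servers.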

IMS also supports choosing crop areas for different image ratios now, instead of for different sizes. Available ratios to choose from are 16:9, 2:3 and 1:1.

When uploading an image, it is first uploaded locally to our server. It is given a public id (used as a resource identifier by the cloud service), which, along with other information related to the image, is saved to our database. After that, we tell the Images API where the image is located and what public id it has. The image is fetched from its location and pushed to the cloud service. Now we can start using our image directly from the cloud, and that is exactly what we do next.

Once an image has been uploaded, the user is redirected to the image editor view. Already here, the image shown is served from the cloud and scaled down to an appropriate size just by adding a scale parameter in the image URL. The user may now define how the image should be cropped in each available ratio, enter some tags, alt-text etc. For increased SEO, we actually save the given tags and copyright text into the image file itself, in its IPTC data. This means, however, that each time the values are changed, the image has to be sent to the cloud again, replacing the old one.
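The IPTC data mentioned above is a simple binary format: each record starts with a 0x1C marker byte, then a record number, a dataset number and a two-byte big-endian length, followed by the data (dataset 2:25 is Keywords, 2:116 is Copyright Notice). As a rough illustration of the structure only – actually embedding the block into a JPEG, as the real system does, is left out:

```javascript
// Build a raw IPTC block from a list of records. Each record is encoded as:
// 0x1C marker, record number, dataset number, 2-byte big-endian length, data.
function buildIptcBlock(fields) {
  const parts = [];
  for (const { record, dataset, value } of fields) {
    const data = Buffer.from(value, 'utf8');
    parts.push(Buffer.from([0x1c, record, dataset, data.length >> 8, data.length & 0xff]));
    parts.push(data);
  }
  return Buffer.concat(parts);
}

// 2:25 = Keywords, 2:116 = Copyright Notice (example values only).
const block = buildIptcBlock([
  { record: 2, dataset: 25, value: 'drupal' },
  { record: 2, dataset: 116, value: 'Yle' },
]);
```

Since the metadata lives inside the image file itself, any change to tags or copyright means re-uploading the whole file to the cloud, which is exactly the trade-off described above.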

Drupal integration
We have a Drupal module that integrates with IMS in order to fetch images from it. In the Drupal frontend we initially always render a 300px wide image in order to show the users some image almost instantly, even though it may be very blurry if it is scaled up. When the page load is ready, a JavaScript routine goes through all images and swaps them for a bigger version.

In the old days, when we had those nine different sizes available, it was hardcoded in the script which size should be used where on the site.

With the cloud service in use we are able to utilize its on-the-fly image manipulations. Now our script actually looks up the size of the containing element of the image (e.g. the parent of the img) and renders an image in exactly that size. This is done simply by changing the size parameters given in the image URL. This enables us to control how large images we are serving just by changing the element size in the stylesheets.
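A minimal sketch of that swap step: a pure helper that rewrites the size parameters in an image URL to match a measured element size. The w_/h_ parameter syntax is an assumption about the cloud service’s URL format.

```javascript
// Rewrite the size parameters in a cloud image URL to match the
// measured size of the containing element.
function resizeUrl(url, width, height) {
  return url.replace(/w_\d+,h_\d+/, `w_${width},h_${height}`);
}

// In the browser, the measured size would come from the img element's parent:
//   const { clientWidth, clientHeight } = img.parentElement;
//   img.src = resizeUrl(img.src, clientWidth, clientHeight);
```

The nice property is that image weight is now controlled entirely from the stylesheets: resize the element, and the served image follows.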

The difference for tablet/mobile when we can select the best possible size for any resolution (click to view bigger version)

One of the most challenging things we encountered was the fact that many images are 640x360px in size – and that is the original image size! So how do we show images that small in articles where we want an 880px wide image? We add an upscale effect.

Using the cloud service’s image manipulations, we take the original image, scale it up to the desired size and blur it as much as possible. Let’s call this our canvas. Then we put the original image in its original size on the canvas that we just made. The result is that it looks like our image got blurred borders. The same kind of technique is used on TV when showing old, 4:3 clips in 16:9 format.
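The blurred-canvas trick can be expressed as a chained transformation in the image URL. The sketch below assumes Cloudinary-like parameter names (c_scale, e_blur, l_ overlays), which may differ from the exact syntax in use; it is an illustration of the chain, not our production code.

```javascript
// Hypothetical sketch of the blurred-canvas upscale: first scale the
// original up to the target size and blur it heavily (the "canvas"),
// then place the unscaled original on top of it.
function upscaleWithBlurredCanvas(publicId, width, height, base) {
  const canvas = `c_scale,w_${width},h_${height},e_blur:2000`; // blurred backdrop
  const overlay = `l_${publicId},fl_layer_apply`;              // original on top
  return `${base}/${canvas}/${overlay}/${publicId}.jpg`;
}
```

The result, as described above, is an image that appears to have blurred borders, much like 4:3 TV clips shown in a 16:9 frame.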

We ran into a few bugs in the open-source libraries we used. We decided to ditch the custom crop tool and use the open-source Jcrop library instead. There was an issue when using a fixed aspect ratio together with a minimum or maximum allowed height for the crop area. We fixed the bug in our GitHub fork and created a pull request to get the fix contributed.

Also, when using cloudinary_php, the PHP library for the cloud service, we noticed a flaw in the logic. When specifying an image to be cropped according to specific coordinates, zero values were not allowed. This prevented any crops from being made e.g. from the top left corner of an image (where both X and Y are 0). The bug was fixed in our fork and merged via our pull request into the library.

Another challenge was that we had over 160 000 images with a total file size of somewhere around 400GB. For all of these, we needed to a) generate a public id, b) upload the image to the cloud and c) save the image version number, returned by the cloud as a response to the upload, in our database.

Of course, we had to do this programmatically. With a quite simple script, we uploaded a sample batch of images. The script read X rows from the database and looped through them, processing one image at a time. The idea was good, but according to our calculations the migration would have taken about 29 days to finish.

We then thought of having multiple instances of the script running simultaneously to speed things up. Again, the idea was good, but we would have run into conflicts when all the scripts tried to read and write against the same database, let alone the same table in the database.

Our final solution was to utilize a message queue for the migration. We chose to use RabbitMQ as our queue, and implemented the Pekkis Queue library as an abstraction layer between the queue and our scripts.

This way we could enqueue all the images to be processed and simultaneously run multiple instances of our script and be sure that each image was processed only once. The migration took all in all about 20 hours.
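The idea can be modelled in a few lines: enqueue every image once up front, then let any number of workers pull from the shared queue, so each image is processed exactly once. This toy version is in-process; the real migration used RabbitMQ via the Pekkis Queue library.

```javascript
// Toy model of the queue-based migration: one shared queue, many workers.
// Each worker repeatedly takes the next image id off the queue and
// processes it; since the queue is shared, no id is handled twice.
function migrate(imageIds, workerCount, processImage) {
  const queue = [...imageIds];          // enqueue everything up front
  const workers = Array.from({ length: workerCount }, async () => {
    let id;
    while ((id = queue.shift()) !== undefined) {
      await processImage(id);           // upload to cloud, save version number...
    }
  });
  return Promise.all(workers);
}

const processed = [];
migrate([1, 2, 3, 4, 5], 3, async (id) => { processed.push(id); })
  .then(() => console.log(processed.length)); // 5, each id exactly once
```

Adding workers simply drains the queue faster, which is what cut the migration from an estimated 29 days down to about 20 hours.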

Written by Rasmus Werling
Rasmus “Rade” Werling has worked with Drupal development for 5 years. His specialities are backend coding and coming up with creative solutions to problems. He has contributed Drupal modules of his own and loves to take on challenges.

Yle Vox first test broadcast

Hello world

On Monday we will have our first public test of our Voice Operated eXchange (Vox) switcher.

What we have done: in a radio studio we have split the microphones and connected the signals to a Shure automixer that recognizes when there is sound in a mic. The automixer sends a GPIO signal to an Arduino, which, using SKAARHOJ’s libraries, sends a command to an ATEM TVS to cut to a camera. As cameras we use GoPros (3 Black Edition and 3+). When the signal light in the studio goes off, the switcher cuts to an input with CasparCG (server 2.0.7 beta), where we run an HTML page with a long-shot picture over it.

The resolution for the system is set to 720p, and the list of equipment used is:

– 1 passive microphone splitter
– 1 Shure audio automixer
– 1 Arduino Uno+ethernet shield
– 1 BMD ATEM Television Studio
– 1 CasparCG server (HP Z400 + 2 BMD Decklink SDI)
– 5 GoPro-cameras
– 1 Datavideo DAC-70 (splitting Longshot GoPro to mixer and CasparCG)

As soon as our test is completed, we will put out the exact specs, including schematics and Arduino code.

The test broadcasts can (hopefully) be seen at  (search for the program named "Succémorgon")

Monday 31.3 until Thursday 3.4 at 4.00-8.00 CET (6-10 Finnish time).

On Friday morning I will try to use the Vox as a secondary video switcher.

Best, Markus

Two teams, two sites – one year with a shared Drupal distro at Yle

Today, one year ago, FYND (Finnish Yle’s New Drupal) launched its first site. Read about the work that went into the preparations here.

It has been a year of shared development when we really have seen the advantages of working on the same distro, and working with open source.

One of the first things we decided on was to work as two teams in a similar way to how NBC Universal works with their Drupal sites (or cells if you want to compare to how Supercell works).

"Clash of Clans and Hay Day were each developed by teams of just five developers. Even now they are global smashes, each has a team no bigger than 15." – Ilkka Paananen in Wired Magazine

Is Five the Optimal Team Size? "…the cost per function point of a team of size 7 was $566 and that of a team of size 14 was $2970" – Jeff Sutherland. "If you have 3 team members, then you will have 4 communication channels; if you have 4, then you have 9. I think the formula is (m-1)^2. In my opinion, a small team of 4 or 5 is ideal." – PMHut

One idea that was also put forward was that we should work as one big team (quite common in big organisations, I guess). We did, however, pursue the idea of having two teams, since we assumed this would make us more focused and productive. It would also allow us to better understand the business goals and needs if we got them independently from two sources (Svenska Yle and Yle Luovat sisällöt).

Issues in YDD

One year on we are happy with the end results, as the distro has continued steadily on its development path.

The main advantages with two teams and two sites have been:

  • Twice as many tasks have gotten done – everyone benefits
  • Maintaining the distro core is a shared task
  • You focus on the task at hand – not a zillion tasks all over the place
  • When there has been a module problem, there has been a clone site to compare with
  • We can mitigate the risk when a new function or module is activated, as we can share the burden by activating it on one of the sites before activating it on the other
  • It has been possible to compare notes, and get an “external” view on different tasks
  • External human & tech resources can be shared between the teams
  • Cross reviewing (using pull requests)
  • Documentation and best practices are better maintained as you clearly see the need and gain. In our case two semi separated teams provide better quality assurance than before
    • One observation is that quality assurance has gone up. One theory we have discussed is that it is because the teams see the other team as an outside party. The team members become aware that someone else will suffer if they write bad code or do not test their code. Just like in open source development where many eyes check the code, we are doing the same on a low level.

Main disadvantages:

  • Time is needed for co-ordinating
  • Time is needed to separate the settings for the two sites, but this would be needed anyway simply because we run sites in two different languages
  • Risk of conflicting business goals – but this has been kept at bay by clearly stating that the purpose of the system is to provide article publishing organised by subject.
  • If you break something you will get blamed

Some of the functionality we have shared over the last year:

  • API connections
  • New version of the Image Management System
  • New meta data solutions for Finto and Freebase

The structure we made on the technical side has worked out nicely, and we have not run into any problems that we have been unable to solve.

We have also found that the business goals are quite similar, and it has not been a problem to agree on changes in the shared core functions.

There has also been spillover in other areas. The content teams can check out what the other teams are doing and get inspired to do the same in their own site.


When there is a request for a new feature, the product owner of one site checks with the product owner of the other (or vice versa) to see if they are also interested. If they are, we will make it for both SYND and FYND (YDD wide). Depending on the type of issue and how important it is for the sites, we negotiate about which team builds it. Maintenance issues have been shared 50/50.

If it is not a shared goal, it will be made only for the site that requested it – or it might not get made at all. This is a good checkpoint: if the feature is not of interest to another site with almost the same functionality, it might not be such a crucial feature after all.

We have a shared issue backlog, but we mark each issue as being for both SYND and FYND, or only for one of them.

In one month we will celebrate two years with SYND. More about that later.

A new version of IMS

After five years and 165 000 images, it is time to give IMS (Image Management System) a facelift. The new IMS version has new features and a new look. If everything goes according to plan, it will go live on Wednesday 26.3.

For all users this means certain changes; here are some screenshots and info about what has changed:

You open IMS the same way as before. You can now also open an image for editing by clicking on it.
This is the first view, where you choose what you want to do.
Browse and select an image.
The view shown when a new image has been uploaded.

If the image you upload is not large enough, the system warns you about this. The minimum size is 1600×900. However, the system allows anything larger than 640×360 px.

There is good reason to always aim for as large an image as possible. We are now starting to use an image that is 880×495 px. If an image does not meet that size, we will upscale it artificially. You can compare this with how 4:3 images are broadcast on TV as 16:9.

Another aspect is that high-resolution screens are being adopted ever more widely. This means that very soon we will have to introduce an even larger size, even though our image bank in IMS does not have large enough images. Therefore, always upload the image in the largest possible size.

Cropping can now be done in 16:9, 1:1 and 2:3 formats.

For the image versions that visitors see, we use an automatic face detection function if a manual crop is missing, which works in 70-90% of cases. In other words, it does not replace a human cropping the image. To crop the individual formats, you click on the respective thumbnail (which automatically shows a preview of the end result).

If no crops have been made, the image looks like this for administrators, so that they can see that the image needs to be cropped.

When you select an image, always check that the crop is correct. In the previous version of IMS, the crop data was not saved, since the crops were saved directly in the individual image file. As a result, we could not recreate how an image had been cropped earlier. Regardless of that, a human needs to check the 1:1 and 2:3 formats, which are completely new and lack crops.

This means that an unknown percentage of our images are now cropped in an unchecked way (this happened automatically during the migration).

If you see an image that needs to be corrected, you can open it in IMS by editing the article. Once in the article, you open the specific image in IMS by clicking on it.

The individual fields now have explanations:

To help others find your image, it pays to write good tags. Separate the tags with commas.

So that people with impaired vision also know what is shown in the image, the ALT field is mandatory.

This field was previously not mandatory. If you edit an old image, you will therefore have to write the description.

The new features include:

  • In situations where, for layout reasons, the format and size of the image need to be known, the 16:9 format will be used
  • Example: An image you upload will always exist as 16:9 as well. If the image is set as the main media of an article, it will, with the current layout, never be shown as 1:1 or 2:3. The 1:1, original and 2:3 crops can be used within an article’s body text by choosing one of them when inserting the image.
  • Within articles you can choose whether an image is 1:1, 16:9, original format or 2:3
  • Within articles you can mark an image as important; it will then be shown in a larger format than an image inserted as standard
  • You can download an image in the size Arenan needs directly from IMS
  • Images are now delivered via a CDN (Content Distribution Network), which means an image is delivered faster because it comes from a data center close to the user
  • You can rotate the image directly in IMS
  • Images can be displayed in any pixel size, which lets us create better user experiences. We can load images exactly as large as they really need to be, and the user does not have to download an unnecessarily heavy image.
  • IMS can turn off the display of all images if we for some reason need to reduce the weight of the whole site. This may become relevant e.g. during a DoS attack.
  • The "Replace image" function takes into account that the image’s width/height may have changed

During the first few days, images on the site and in the admin will be a bit slower, but they will speed up as more and more of the images have been displayed and stored in the CDN’s cache.

Thanks to Rasmus and Tero for their contributions.



I started to work at Yle over a year ago as a full-time HTML5 game developer. With prior experience in HTML5 game development I chose to start developing with the Impact Game Engine since it had the features that matched the project goals at the time.

I soon realised that Impact fell short on features like mobile support, a tweening engine, a particle engine, Web Audio, Spine animation, Retina support etc. Since I really liked the architecture of Impact, I started to develop the missing features as plugins to the engine. While building plugins for Impact, I discovered a new 2D rendering engine called Pixi.js, which really impressed me with its performance on both desktop and mobile. Unlike Impact, Pixi also supported WebGL, which will speed up games even more in the future since it can utilize the graphics processor for rendering. I soon found myself implementing the Pixi renderer and utilizing it in my projects.

After a year of developing for Yle BUU with Impact, I noticed that I could turn my Impact plugins and all my knowledge into a whole new game engine. That’s when I started to work on the Panda.js engine.

After some discussion, I got a green light from Yle to release Panda as an open source project, and on the 11th of February I released the Panda website and source code on GitHub. With the engine, I also released an open source Flappy Bird clone called Flying Dog as a technology demo.

Two weeks after the release, Flying Dog has reached over 1.4M gameplays, and the Panda website has had 6,600 unique visitors (most from the US and Russia), dozens of tweets and 150 stars on GitHub.

Go ahead and thrash your keyboard: test it out, create something mad with it and contribute. I will keep updating, bugfixing and continuing the development of Panda, but the development will primarily be centered around the needs of the Yle BUU projects.

Improve workflow & UI by making an Entity Reference View with multiple fields

User story: Help editors pick the correct representation of an article when connecting it to be displayed in a new department.

Background: We noticed that editors were picking the wrong representation of an article, as there can be many of them. The reason for this is that we let editors customize the representation of their article depending on the department where it is used.

Solution: Change the Entity Reference (usually a simple entity selection field) to an Entity Reference View.

Step by step: 1. Create a new view, add an Entity Reference display, and add the fields (filters etc., just like a regular view) you want to display to the editors. If you select more than one field, you will need to specify the "Search fields" (Format –> Settings). In our case we added the date and the department title. We are also thinking of setting a DESC sort criterion, as it is likely that most editors are looking for recently published content.

2. Go to your content type and edit the field that is an Entity Reference. Change the reference selection to a view, and then pick the view you created. In "View used to select the entities", select the display you made.

Entity reference

3. This is the end result with some additional CSS work:

Styled entity reference

Just adding the fields would have improved the editorial workflow, but a little bit of CSS helped make it even more usable. I removed the default wrappers and added some custom CSS classes for the fields. This way I was able to adjust the styling of the title, date and department.

.reference-autocomplete {
  border-bottom: 1px solid #f1f1f1 !important;
  padding: 3px 2px;
  border-color: transparent !important;
}
.reference-autocomplete {
  font-weight: bold;
}
.reference-autocomplete {
  color: #555;
  font-size: 0.8em;
}
.reference-autocomplete:hover {
  color: #f1f1f1;
}
.reference-autocomplete:hover, .reference-autocomplete:hover, .reference-autocomplete:hover {
  background-color: #f3f4ee;
  border-radius: 3px;
  color: #0072b9;
  padding: 1px 2px;
}
.reference-autocomplete, .reference-autocomplete, .reference-autocomplete {
  color: #555;
  font-size: 0.7em;
  font-weight: bold;
  text-transform: uppercase;
}
I decided to style all Entity References in the admin theme with a bit more padding and a bottom border, as it improves readability.

When trying to inspect the autocomplete div, I noticed it was quite difficult to grab via the inspect element function, but grabbing it via "Copy as HTML" worked. This is what the basic markup looks like for a regular simple Entity Reference field.

<div id="autocomplete">
  <ul>
    <li><div><div class="reference-autocomplete">Österbotten</div></div></li>
    <li><div><div class="reference-autocomplete">Kontakta Yle Österbotten</div></div></li>
    <li><div><div class="reference-autocomplete">Lyssna på Radio Vega Österbotten!</div></div></li>
    <li><div><div class="reference-autocomplete">Valet i Österbotten</div></div></li>
  </ul>
</div>

The class name of the line you hover over is "selected".

Thanks to Olli Vesslin.

Replacing Drupal search with SOLR

There had been a need to replace Drupal’s core search with Apache SOLR on Svenska YLE for quite some time. Before I could begin the implementation, we needed to decide which Drupal modules to use. There were really only two options: the Apache SOLR and Search API modules. Search API was already familiar to us and had better Views support for our purposes, making it the obvious choice from the very beginning. At this point, we hadn’t even done any actual comparison between Search API and Apache SOLR.

We already had an Apache SOLR test environment on YLE’s internal network, so we only needed to discuss how to work with the Apache SOLR service on the local environments of the developers. We could either use a local virtual SOLR environment (e.g. VirtualBox) or we could use an external service that could be accessed from anywhere. Using SOLR service within YLE’s internal network was out of the question because the development environment service needs to be functional outside of YLE’s network.

We investigated some of the external SOLR services available, but finally chose to use local virtual SOLR environments. The main problem with this was how to ensure that all developers would have exactly the same development environment, and that it would be similar to the production environment. After a few trials and errors, a Vagrant box gave us the solution to this problem. I will not go any further into the subject of Vagrant at this point, except to say that Vagrant is the perfect tool for managing environments.

Once the modules and environments were selected, the actual implementation work could begin. We were using SOLR 3.x in both the production and test environments, so I needed to set up a similar environment locally. I found a ready-made vagrant-solr-box on GitHub, so I decided to try that first. The environment worked just fine, so I continued the implementation using it.

I installed the Search API and Search API SOLR modules and also the Search API SOLR Overrides module for overriding SOLR server settings in different environments. Configuring Search API in Drupal was already a familiar procedure to me, and everything proceeded very smoothly. I began by configuring the Search API SOLR server and index. I replaced the content listing pages with the help of the Search API Views module, and everything seemed to work nicely on my local environment. We were now ready to move everything to the test environment, where a “real” Apache SOLR environment was waiting for us. All we needed was a new SOLR core for our site.

As I mentioned, everything had proceeded reasonably well, so far, but in the test environment, we started to run into some problems. First, Drupal wasn’t able to connect to the Apache SOLR server. By adjusting the proxy settings, we were able to resolve this issue, but Search API still just wasn’t working with the multicore Apache SOLR on the test environment. Indexing was successful on our local virtual environments, but these had a single-core SOLR server. The configuration that had worked just fine on my local environment didn’t work at all in the test environment, even though both were using the same version of Apache SOLR.

To solve the problem, we started by installing Vanilla Drupal on the test environment with the same modules in use on the actual site. By doing this, we were able to exclude any possible problems that might be caused by our own installation profile and features. Search API was not indexing content on this new test site, either, so we decided to try upgrading SOLR. We upgraded SOLR from version 3.6 to 4.4, and at the same time we updated schema.xml to support the latest Search API and Apache SOLR modules. This resolved the problem, the test site was able to index content to SOLR, so we configured the actual site and indexing started working there, as well.

We were very relieved when this adventure was finally over. A task that initially had seemed easy didn’t turn out to be quite so easy after all, as these things usually go, but there is no greater joy than when everything works out in the end.

With the SOLR index, we have been able to replace most of the taxonomy listing pages, and this has meant a reduction in the processor load (on the database server), especially in views that have depth enabled. The next thing to look into is removing the standard Drupal search index, to get a smaller database.

Written by Ari Ruuska
Ari has worked with Drupal development for about 7 years, most of that time as a consultant at YLE, as a Drupal developer and architect. He has also managed Drupal projects and developer teams.

Video on the web is undergoing a revolution

Video quality on the web is currently developing at a rapid pace. A few years ago we had to settle for resolutions of 240p, 360p or 480p. Today 1080p has become a common standard, while we are moving at a good pace towards 2160p.


The numbers stand for the number of vertical pixels, so that, for example, 1080p means 1080 pixels vertically and 1920 pixels horizontally. The 4K resolution, also called Ultra HD or UHD, is four times as large, i.e. 3840 by 2160 pixels.

The video service YouTube has offered 1080p for the past five years, and for the past three years it has also accepted 2K, 4K and even 8K. The film service Netflix offers most films in its European version in 1080p and plans to start testing 4K shortly.

Hardware on the market has generally caught up with 1080p. Mobile phones and tablets that can handle "Full HD", or 1080p, have established themselves in earnest during the past year.

When it comes to Ultra HD or 4K, the market still has some way to go. Computer monitors and TV sets that can handle 4K only appeared in earnest during 2013. Prices are still at a level that deters most consumers. At the same time, consumer video cameras that can shoot 4K have appeared on the market.

Higher video quality requires more bandwidth. Internet connections that can stream 1080p video are common in Finland today, and connections that can handle 4K are likewise becoming more and more available. A connection of 10 Mbps (megabits per second) is generally enough for 1080p, while a connection of closer to 100 Mbps is usually recommended for 4K to work without problems.

Video quality is not only about resolution but also, among other things, about compression, which for the higher resolutions on the web often means "H.264" or "AVC". Higher video quality requires more power from the hardware that has to "unpack" the video streams, which places higher demands on processors and graphics cards, among other things.

So where does Yle Arenan stand when it comes to picture quality? Today (December 2013), Arenan offers a maximum resolution of 360p, which corresponds to 360 pixels vertically and 640 pixels horizontally. The bandwidth is 654 kbps (kilobits per second) for the video and 96-128 kbps for the audio.

By way of comparison, the "good old" TV resolution in Europe, introduced in the 1950s, is 720 by 576 pixels (PAL).

According to plan, during 2014 Yle Arenan will gain the technical capability to offer video in 1080p. The changes go hand in hand with Yle gradually starting to offer all of its TV channels in 1080i.