Changing a Drupal pathauto alias pattern with working redirects

So we recently found out we’ve been a bit boneheaded with the pathauto alias to our Freebase taxonomy pages. The pattern was http://svenska.yle.fi/term/freebase/politics-1234, where 1234 was the term ID (tid) in Drupal.

This is a stupid alias pattern, since it makes our urls obfuscated – impossible to determine from the outside without access to our Drupal database. It is also bad semantic form, because the url contains something that is meaningless to most people. So we wanted to change the pathauto alias to use the unique Freebase ID instead.

Here it is important to remember that the best-looking url would naturally be just http://svenska.yle.fi/term/freebase/politics, but because of disambiguation (dealing with the case where a word can have many different meanings, such as ”mercury”, which can be an element, a Roman god, a planet and many other things) we want to guarantee that a url is unique.

If we look up the word politics on Freebase, we find that its unique Freebase ID is /m/05qt0 and so we would like our url to have the form http://svenska.yle.fi/term/freebase/politics-m05qt0.

Our own Freebase module (which may be released to Drupal.org at a later time) has added a field called ”field_freebase_freebaseid” to a taxonomy vocabulary called ”freebase”. This means we have access to the token [term:field-freebase-freebaseid], which makes the whole pattern for Freebase taxonomy term listings the following:

term/freebase/[term:name]-[term:field-freebase-freebaseid]

The problem

The problem is that when we change the url alias pattern we want to leave the old alias intact and redirect from it to the new one. This functionality is built into the pathauto module: you can open up a taxonomy term for editing, save it and the new alias will be generated and the old one made into a redirect.

However, we have 6 000 Freebase terms, and it would take a day to open them all up and save them to get the new alias with a redirect. It seems fortunate, then, that the pathauto module has a bulk update feature. Bulk update generates aliases for all entities in a specific content type. Unfortunately, bulk update only works on entities – in our case taxonomy terms – that don’t yet have a pathauto alias. What you would have to do is delete all current aliases and then start the bulk update, which would generate new aliases using the new pattern. But if we start by deleting all current aliases, no redirects can be created! Here are some articles and threads discussing this very issue. Apparently it’s been a problem for around four years:

https://drupal.org/node/1661546

http://drupal.stackexchange.com/questions/98350/how-create-redirect-pattern-from-old-to-new-urls

Basically, if you’ve created thousands of pathauto aliases that have been indexed by Google and need to exist as redirects to the new alias, you’re out of luck! This seems like an incomprehensible oversight and part of me thinks I must’ve missed something, because this isn’t acceptable.

The solution

Searching the web has given us several ideas about how to deal with this issue, but most require some kind of manual hacking of the database, which doesn’t really sound like something we want to do.

Instead, we ended up writing a simple drush script that just loads all terms in a taxonomy vocabulary (”freebase” in our example, but the script could easily be modified to take a command line parameter) and re-saves them. Writing the script took about a third of the time it took to write this blog text, so hopefully at least two other Drupal users will find this beneficial.

I am assuming you are familiar with Drush scripts, but to briefly explain: assuming your module is named ”freebase”, you can just create a file called ”freebase.drush.inc” in the same folder, and when your module is enabled, the drush.inc file will be picked up automatically as an available drush command file.

The code

The definition of the Drush-command:

// The hook must be named after the command file: for freebase.drush.inc
// the function is freebase_drush_command().
function freebase_drush_command() {
  $items = array();
  $items['yle-resave-freebase-terms'] = array(
    'description' => "Loads all Freebase terms and saves them to force Drupal to create new pathauto aliases.",
    'callback' => 'drush_fix_freebase',
    'aliases' => array('yrft'),
  );
  return $items;
}

The accompanying function that does the actual work:

function drush_fix_freebase() {
  // Find freebase vocabulary id.
  $fbvoc = taxonomy_vocabulary_machine_name_load('freebase');
  if (!empty($fbvoc)) {
    $vid = $fbvoc->vid;
    // Get a list of all terms in that vocabulary.
    $tree = taxonomy_get_tree($vid);
    $counter = 0;
    // Loop through all term ids, load and save.
    foreach ($tree as $t) {
      drush_print('Fixing '.$t->tid.' ['.++$counter.' / '.count($tree).']');
      // Load and save term to refresh its pathauto alias.
      $term = taxonomy_term_load($t->tid);
      taxonomy_term_save($term);
    }
  }
}

Finally, you need to run the script, which will report how many terms have been processed out of the total. The command is:

drush yrft

After running this, it’s easy to verify that all terms have indeed received new aliases and a redirect from the old alias.
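For a quick sanity check beyond opening a few terms by hand, the results can be counted straight from the database. A minimal sketch, run e.g. with drush php-script, assuming Drupal 7’s url_alias table and the redirect module’s standard schema:

// Count current aliases and the redirects left behind by the old ones.
$aliases = db_query("SELECT COUNT(*) FROM {url_alias} WHERE alias LIKE 'term/freebase/%'")->fetchField();
$redirects = db_query("SELECT COUNT(*) FROM {redirect} WHERE source LIKE 'term/freebase/%'")->fetchField();
drush_print("Aliases: $aliases, redirects from old aliases: $redirects");

If both counts roughly match the number of terms in the vocabulary, the resave has done its job.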

Cloudifying images on svenska.yle.fi

Yle’s internal Image Management System (IMS) was recently renewed. It was a big leap forward for the site svenska.yle.fi to move all its images to the cloud, not only for the storage but for all the transformations as well.

Background
IMS is a custom, PHP-based solution for storing images in a central place. It supports uploading and cropping images as well as managing image information such as tags, alt text and copyright. Images can be searched by tag or upload date. The system is multilingual, currently supporting English, Finnish and Swedish.

IMS was born about five years ago, in December 2008, when its first version was launched. It was a quite simple-looking tool for uploading and searching images. The workflow was to upload an image, enter its information such as tags and copyright, select its crop area(s) and save it. An editor would then select the image while writing an article in SYND, FYND or any of the older, since migrated sites. The image id is saved in the article, and the image is displayed via IMS. This way the same image may be reused in multiple articles.

Different image sizes are needed for each image depending on where on the site the images are displayed. IMS had a custom-made, JavaScript-based cropping tool for selecting the crop area of the image. The alternatives were to use the same crop area for all the different image sizes, or to crop each size separately. The result was that we had 10 image files stored per uploaded image: the original plus the cropped version in nine different sizes ranging between 640x360px and 60x60px. All of these were in 16:9 ratio, except for the last one, which was 1:1.

Cloudification
Along with Yle’s new Images API, the new version of IMS serves images from a cloud service. All transformations are done on the fly by specifying parameters in the image URL. Therefore, no actual image crops are performed on our servers anymore; we only save the crop coordinates (as a string in our database) and relay them to the cloud.
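To give a feel for what this looks like in practice, here is a sketch using the cloudinary_php helper (the public id, sizes and coordinates are made-up example values):

// Build a URL that asks the cloud to crop to our stored coordinates
// and then scale the result – no image processing happens on our servers.
$url = cloudinary_url('ims/12345-politics', array(
  'transformation' => array(
    array('x' => 120, 'y' => 80, 'width' => 640, 'height' => 360, 'crop' => 'crop'),
    array('width' => 300, 'crop' => 'scale'),
  ),
));
// Yields something like:
// .../image/upload/c_crop,h_360,w_640,x_120,y_80/c_scale,w_300/ims/12345-politics

All of the work is encoded in the URL itself, so changing a crop is just a matter of serving a different URL.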

IMS also supports choosing crop areas for different image ratios now, instead of for different sizes. Available ratios to choose from are 16:9, 2:3 and 1:1.

When uploading an image, the image is first uploaded locally to our server. It is given a public id (used as a resource identifier by the cloud service), which, along with other information related to the image, is saved to our database. After that we tell Images API where the image is located and what public id it has. The image is fetched from its location and pushed to the cloud service. Now we can start using our image directly from the cloud, and that is exactly what we do next.

Once an image has been uploaded, the user is redirected to the image editor view. Already here, the image shown is served from the cloud and scaled down to an appropriate size just by adding a scale parameter in the image URL. The user may now define how the image should be cropped in each available ratio, enter some tags, alt-text etc. For increased SEO, we actually save the given tags and copyright text into the image file itself, in its IPTC data. This means, however, that each time the values are changed, the image has to be sent to the cloud again, replacing the old one.
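Writing the IPTC block in PHP is straightforward with iptcembed(). A simplified sketch (field numbers per the IPTC spec: 2#025 keywords, 2#116 copyright notice; the file path is a hypothetical local copy):

// Build one IPTC DataSet: marker, record, field, 2-byte length, value.
// This short form is fine for values under 0x8000 bytes.
function ims_make_iptc_tag($rec, $field, $value) {
  $len = strlen($value);
  return chr(0x1C) . chr($rec) . chr($field) . chr($len >> 8) . chr($len & 0xFF) . $value;
}

$path = '/tmp/politics.jpg';
$iptc = ims_make_iptc_tag(2, 25, 'politics')
      . ims_make_iptc_tag(2, 25, 'freebase')
      . ims_make_iptc_tag(2, 116, 'Copyright Yle');
// iptcembed() returns the JPEG data with the IPTC block embedded.
file_put_contents($path, iptcembed($iptc, $path));

After this the file is pushed to the cloud again, replacing the previous version.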

Drupal integration
We have a Drupal module that integrates with IMS in order to fetch images from it. In the Drupal frontend we initially always render a 300px-wide image in order to show the user some image almost instantly, even though it may be very blurry where it has been scaled up. When the page has loaded, a JavaScript routine goes through all the images and swaps them for bigger versions.

In the old days, when we had those nine fixed sizes available, the script had hardcoded rules for which size should be used where on the site.

With the cloud service in use, we are able to utilize its on-the-fly image manipulations. Our script now actually looks up the size of the image’s containing element (e.g. the parent of the img) and renders an image in exactly that size. This is done simply by changing the size parameters in the image URL, which enables us to control how large the images we serve are just by changing element sizes in the stylesheets.

The difference for tablet/mobile when we can select the best possible size for any resolution

Challenges
One of the most challenging things we encountered was the fact that many images are 640x360px in size – and that is the original image size! So how do we show images that small in articles where we want an 880px-wide image? We add an upscale effect.

Using the cloud service’s image manipulations, we take the original image, scale it up to the desired size and blur it as much as possible. Let’s call this our canvas. Then we put the original image in its original size on the canvas that we just made. The result is that it looks like our image got blurred borders. The same kind of technique is used on TV when showing old, 4:3 clips in 16:9 format.
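With Cloudinary-style chained transformations the whole effect can again be expressed in the URL. A sketch with a made-up public id and sizes:

// 1. Scale the original up to the target size and blur it heavily;
//    this becomes the canvas.
// 2. Overlay the same image, at its natural size, centered on the canvas.
$url = cloudinary_url('ims/12345-politics', array(
  'transformation' => array(
    array('width' => 880, 'height' => 495, 'crop' => 'scale', 'effect' => 'blur:2000'),
    array('overlay' => 'ims:12345-politics', 'gravity' => 'center'),
  ),
));

(In overlay parameters the slashes of a public id are replaced with colons.)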

We also ran into a few bugs in the open-source libraries we used. We decided to ditch the custom crop tool and use the open-source Jcrop library instead. There was an issue when using a fixed aspect ratio together with a minimum or maximum allowed height of the crop area. We fixed the bug in our GitHub fork and created a pull request to get the fix contributed.

Also, when using cloudinary_php, the PHP library for the cloud service, we noticed a flaw in the logic. When specifying an image to be cropped according to specific coordinates, zero values were not allowed. This prevented any crop from being made from, for example, the top left corner of an image (where both X and Y are 0). The bug was fixed in our fork and merged into the library via our pull request.

Migration
Another challenge was that we had over 160 000 images with a total file size of somewhere around 400 GB. For all of these we needed to a) generate a public id, b) upload the image to the cloud and c) save the image version number, returned by the cloud in response to the upload, in our database.

Of course we had to do this programmatically. With a quite simple script we uploaded a sample batch of images. The script read X rows from the database and looped through them, processing one image at a time. The idea was good, but according to our calculations the migration would have taken about 29 days to finish.

We then thought of having multiple instances of the script running simultaneously to speed things up. Again, the idea was good, but we would have run into conflicts when all the scripts tried to read and write against the same database, let alone the same table in the database.

Our final solution was to utilize a message queue for the migration. We chose to use RabbitMQ as our queue, and implemented the Pekkis Queue library as an abstraction layer between the queue and our scripts.

This way we could enqueue all the images to be processed and simultaneously run multiple instances of our script and be sure that each image was processed only once. The migration took all in all about 20 hours.
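The scripts themselves are not shown here, but the shape of the solution was roughly the following. This sketch talks to RabbitMQ directly through php-amqplib rather than through the Pekkis Queue abstraction we actually used, and the queue name and message fields are made up:

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Producer: enqueue one message per image row.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('ims_migration', false, true, false, false);
foreach ($images as $image) {  // $images: rows from our image table.
  $msg = new AMQPMessage(json_encode(array('id' => $image->id)),
    array('delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT));
  $channel->basic_publish($msg, '', 'ims_migration');
}

// Consumer, run as N parallel processes: the broker hands each message
// to exactly one worker, so no image is processed twice.
$channel->basic_qos(null, 1, null);
$channel->basic_consume('ims_migration', '', false, false, false, false, function ($msg) {
  $payload = json_decode($msg->body);
  // ... fetch image $payload->id, upload it, save the returned version ...
  $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
});
while (count($channel->callbacks)) {
  $channel->wait();
}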

Written by Rasmus Werling
Rasmus “Rade” Werling has worked with Drupal development for 5 years. His specialities are backend coding and coming up with creative solutions to problems. He has contributed Drupal modules of his own and loves to take on challenges.

Two teams, two sites – one year with a shared Drupal distro at Yle

One year ago today, FYND (Finnish Yle’s New Drupal) launched its first site. Read about the work that went into the preparations here.

It has been a year of shared development, in which we have really seen the advantages of working on the same distro and working with open source.

One of the first things we decided on was to work as two teams in a similar way to how NBC Universal works with their Drupal sites (or cells if you want to compare to how Supercell works).

”Clash of Clans and Hay Day were each developed by teams of just five developers. Even now they are global smashes each has a team no bigger than 15.” Ilkka Paananen in Wired Magazine

Is Five the Optimal Team Size? ”…the cost per function point of a team of size 7 was $566 and that of a team of size 14 was $2970” – Jeff Sutherland. ”If you have 3 team members, then you will have 4 communication channels, if you have 4 then you have 9. I think the formula is (m−1)². In my opinion, a small team of 4 or 5 is ideal.” – PMHut

One idea that was also put forward was that we should work as one big team (quite common in big organisations, I guess). We did, however, pursue the idea of two teams, since we assumed this would make us more focused and productive. It would also allow us to better understand the business goals and needs if we got them independently from two sources (Svenska Yle and Yle Luovat sisällöt).

Issues in YDD

One year on we are happy with the end results, as the distro has continued steadily on its development path.

The main advantages with two teams and two sites have been:

  • Twice as many tasks have gotten done – everyone benefits
  • Maintaining the distro core is a shared task
  • You focus on the task at hand – not a zillion tasks all over the place
  • When there has been a module problem, there has been a clone site to compare with
  • We can mitigate the risk when a new function or module is activated, as we can share the burden by activating it on one of the sites before activating it on the other
  • It has been possible to compare notes, and get an “external” view on different tasks
  • External human & tech resources can be shared between the teams
  • Cross reviewing (using pull requests)
  • Documentation and best practices are better maintained, as you clearly see the need and the gain. In our case two semi-separated teams provide better quality assurance than before
    • One observation is that quality assurance has gone up. One theory we have discussed is that it is because the teams see the other team as an outside party. The team members become aware that someone else will suffer if they write bad code or do not test their code. Just like in open source development where many eyes check the code, we are doing the same on a low level.

Main disadvantages:

  • Time is needed for co-ordinating
  • Time is needed to separate the settings for the two sites, but this would be needed anyway, simply because we are running sites in two different languages
  • Risk of conflicting business goals – but this has been kept at bay by clearly stating that the purpose of the system is to provide article publishing according to subjects.
  • If you break something you will get blamed

Some of the functionality we have shared over the last year:

  • API connections
  • New version of the Image Management System
  • New meta data solutions for Finto and Freebase

The structure we made on the technical side has worked out nicely, and we have not run into any problems that we have been unable to solve.

We have also found that the business goals are quite similar, and it has not been a problem to agree on changes in the shared core functions.

There has also been spillover in other areas. The content teams can check out what the other teams are doing and get inspired to do the same in their own site.

Workflow

When there is a request for a new feature, the product owner of svenska.yle.fi checks with the product owner at yle.fi/aihe (or vice versa) whether they are also interested. If they are, we will make it for both SYND and FYND (YDD wide). Depending on the type of issue and how important it is for the sites, we negotiate over which team builds it. Maintenance issues have been shared 50/50.

If it is not a shared goal, it will be made only for the site that requested it – or it might not get made at all. This is a good checkpoint: if the feature is not of interest to another site with almost the same functionality, it might not be such a crucial feature after all.

We have a shared issue backlog, but we mark each issue as being for both SYND and FYND, or for only one of them.

In one month we will celebrate two years with SYND. More about that later.

Improve workflow & UI by making an Entity Reference View with multiple fields

User story: Help editors pick the correct representation of an article when connecting it to be displayed in a new department.

Background: We noticed that editors were picking the wrong representation of an article, as there can be many of them. The reason for this is that we let editors customize the representation of their article depending on the department where it is being used.

Solution: Change the Entity Reference (usually a simple entity selection field) to an Entity Reference View.

Step by step: 1. Create a new view, add an Entity Reference display, and add the fields (filters etc., just like in a regular view) you want to display to the editors. If you select more than one field, you will need to specify the ”Search fields” (Format –> Settings). In our case we added the date and department title. We are also thinking of adding a sort criterion, descending by date, as it is likely most editors are looking for recently published content.

2. Go to your content type and edit the field that is an Entity Reference. Change the reference selection to a view, and then pick the view you created. In ”View used to select the entities”, select the display you made.

Entity reference
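For those managing fields in code (e.g. exported with Features), the same choice ends up in the field definition. A sketch with made-up view and field names:

// Entity Reference field settings using the Views selection handler.
$field = array(
  'field_name' => 'field_related_article',
  'type' => 'entityreference',
  'settings' => array(
    'target_type' => 'node',
    'handler' => 'views',  // A view instead of simple entity selection.
    'handler_settings' => array(
      'view' => array(
        'view_name' => 'article_picker',        // The view created in step 1.
        'display_name' => 'entityreference_1',  // Its Entity Reference display.
        'args' => array(),
      ),
    ),
  ),
);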

3. This is the end result with some additional CSS work:

Styled entity reference

Just adding the fields would have improved the editorial workflow, but a little bit of CSS made it even more usable. I removed the default wrappers and added some custom CSS classes for the fields. This way I was able to adjust the styling of the title, date and department.

.reference-autocomplete {
  border-bottom: 1px solid #f1f1f1 !important;
  padding: 3px 2px;
}
.reference-autocomplete:hover {
  border-color: transparent !important;
}
.reference-autocomplete span.er-node-title {
  font-weight: bold;
}
.reference-autocomplete span.er-post-date {
  color: #555;
  font-size: 0.8em;
}
.reference-autocomplete:hover span.er-post-date {
  color: #f1f1f1;
}
.reference-autocomplete:hover span.er-promo-parent-subject-page, .reference-autocomplete:hover span.er-kicker, .reference-autocomplete:hover span.er-content-type {
  background-color: #f3f4ee;
  border-radius: 3px;
  color: #0072b9;
  padding: 1px 2px;
}
.reference-autocomplete span.er-promo-parent-subject-page, .reference-autocomplete span.er-kicker, .reference-autocomplete span.er-content-type{
  color: #555;
  font-size: 0.7em;
  font-weight: bold;
  text-transform: uppercase;
}

I decided to style all Entity References in the admin theme with a bit more padding and a border-bottom line, as it improves readability.

When trying to inspect the autocomplete div, I noticed it was quite difficult to grab it via the inspect element function, but grabbing it via ”Copy as HTML” worked. This is what the basic markup looks like for a regular Entity Reference Simple field.

<div id="autocomplete">
  <ul>
    <li><div><div class="reference-autocomplete">Österbotten</div></div></li>
    <li><div><div class="reference-autocomplete">Kontakta Yle Österbotten</div></div></li>
    <li><div><div class="reference-autocomplete">Lyssna på Radio Vega Österbotten!</div></div></li>
    <li><div><div class="reference-autocomplete">Valet i Österbotten</div></div></li>
  </ul>
</div>

The line you are hovering over gets the class name ”selected”.

Thanks to Olli Vesslin.

Replacing Drupal search with SOLR

There had been a need to replace Drupal’s core search with Apache SOLR on Svenska YLE for quite some time. Before I could begin the implementation, we needed to decide which Drupal modules to use for the job. There were really only two options: the Apache SOLR and Search API modules. Search API was already familiar to us, and it had better Views support for our purposes, making it the obvious choice from the very beginning. At this point we still haven’t done any actual comparison between Search API and Apache SOLR.

We already had an Apache SOLR test environment on YLE’s internal network, so we only needed to decide how to provide the Apache SOLR service in the developers’ local environments. We could either use a local virtual SOLR environment (e.g. VirtualBox) or an external service that could be accessed from anywhere. Using a SOLR service within YLE’s internal network was out of the question, because the development environment needs to be functional outside YLE’s network.

We investigated some of the available external SOLR services, but finally chose to use local virtual SOLR environments. The main problem with this was how to ensure that all developers would have exactly the same development environment, and that the development environment would be similar to the production environment. After a few trials and errors, a Vagrant box gave us the solution to this problem. I will not go any further into the subject of Vagrant at this point, except to say that Vagrant is the perfect tool for managing environments.

Once the modules and environments were selected, the actual implementation work could begin. We were using SOLR 3.x in both the production and test environments, so I needed to set up a similar environment locally. I found a ready-made vagrant-solr-box on GitHub, so I decided to try that first. The environment worked just fine, so I continued the implementation using it.

I installed the Search API and Search API SOLR modules, and also the Search API SOLR Overrides module for overriding SOLR server settings in different environments. Configuring Search API in Drupal was already a familiar procedure to me, and everything proceeded very smoothly. I began by configuring the Search API SOLR server and index. I replaced the content listing pages with the help of the Search API Views module, and everything seemed to work nicely in my local environment. We were now ready to move everything to the test environment, where a “real” Apache SOLR environment was waiting for us. All we needed was a new SOLR core for our site.
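The point of the overrides is that each environment can point the same Search API server at its own SOLR instance from settings.php. Roughly like this – note that the exact variable name and structure below are assumptions for illustration, so check the module’s documentation for the real format:

// settings.php – per-environment SOLR connection settings (sketch).
$conf['search_api_solr_overrides'] = array(
  'our_solr_server' => array(            // Search API server machine name.
    'name' => 'SOLR (test)',
    'options' => array(
      'host' => 'solr-test.example.org', // Hypothetical host.
      'port' => 8983,
      'path' => '/solr/oursite',         // The site's own SOLR core.
    ),
  ),
);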

As I mentioned, everything had proceeded reasonably well so far, but in the test environment we started to run into problems. First, Drupal wasn’t able to connect to the Apache SOLR server. By adjusting the proxy settings we were able to resolve this issue, but Search API still wasn’t working with the multicore Apache SOLR in the test environment. Indexing was successful in our local virtual environments, but these had a single-core SOLR server. The configuration that had worked just fine in my local environment didn’t work at all in the test environment, even though both were using the same version of Apache SOLR.

To solve the problem, we started by installing a vanilla Drupal on the test environment with the same modules as on the actual site. By doing this, we were able to rule out any problems that might be caused by our own installation profile and features. Search API was not indexing content on this new test site either, so we decided to try upgrading SOLR. We upgraded SOLR from version 3.6 to 4.4, and at the same time updated schema.xml to support the latest Search API and Apache SOLR modules. This resolved the problem: the test site was able to index content to SOLR, so we configured the actual site, and indexing started working there as well.

We were very relieved when this adventure was finally over. A task that had initially seemed easy turned out not to be quite so easy after all, as these things usually go, but there is no greater joy than when everything works out in the end.

With the SOLR index we have been able to replace most of the taxonomy listing pages, and this has meant a reduction in the processor load (on the database server) – especially in views that have depth enabled. The next thing to look into is removing the standard Drupal search index, to get a smaller database.

Written by Ari Ruuska
Ari has worked with Drupal development for about 7 years, most of that time as a consultant at YLE, as a Drupal developer and architect. He has also managed Drupal projects and developer teams.

DrupalCon 2013 Prague and the strength of the community

I had the pleasure of attending DrupalCon 2013 in Prague. DrupalCon is at heart a technical conference that brings together the developers working with Drupal. Right now the focus is firmly on the next version, Drupal 8. Big changes are on the way, which raises both great hopes and great fears within the Drupal community.

DrupalCon 2013 Prague group photo

DrupalCon 2013 Prague group photo. CC By-SA Michael Schmid http://www.flickr.com/photos/x-foto/

Smaller players worry that they will no longer be able to manage a Drupal site on their own, that the sites will become too complex. Some users are starting to use Drupal in new ways, cutting the threads between the backend (the technical machinery) and the frontend (what users see on their screens). Others want to keep developing Drupal 7 for now, since it is a system that works. And yet others cannot wait for Drupal 8 to be released so they can take advantage of all the new things on the way. What is clear, however, is that more developers have already contributed to Drupal 8 than to any previous version.

And there really is a lot that is new: Drupal 8 contains over 200 new features. In his keynote, Dries Buytaert, ”the father of Drupal”, spoke in particular about Drupal’s significance for society and what the system makes possible as an open source web publishing platform. In the same spirit, Dries also focused heavily on features such as accessibility, multilingualism and mobile adaptation in his talk. It was also gratifying for me to hear that much weight has been put on semantic annotation in Drupal, mainly in line with the definitions in schema.org (more about our work on this in a later post).

The second day’s keynote was held by Lisa Welchman, who spoke about the importance of nurturing the community that has been built up. Open communities face a paradox: the openness itself becomes the driving force behind growth and a creative community, which in turn creates demands for governance, which risks stifling the community. This is something we have seen in the Drupal community, where the DrupalCons and smaller meetups alike have been fantastically inspiring. The help that developers give each other, completely free of charge and out of pure goodwill, is simply fantastic – and, viewed from a more hard-nosed business perspective, incredibly good value, for the work the community does and shares is worth millions of euros!

But at the same time, the flip side appears. As an endeavour grows, it starts sprawling in different directions and can no longer work effectively without governance. Here one has to tread carefully, though. Governance, too, is something you can do together – and in that way overcome the paradox between the inspiring, growth-giving freedom and the potentially stifling governance.

A lesson we could well carry over to other fields of activity, not least journalism.

Below is a Storify collecting my Twitter perspective on DrupalCon Prague:

Switching Distros on a Running Site

As Mårten explained in an earlier post, we decided to redo our installation profile into a more diversified model, with a common core set of modules and a differentiated set of modules for each site that uses the common core.

Splitting up the previous Yle profile, which we had used as the base for our installation from the beginning, was easy enough: just create two new installation profiles, Syndprofile and Fyndprofile, that reuse most parts of the earlier Yleprofile, and move the parts that were specific to the two installations into their own sets of repositories, i.e. synd_modules, fynd_modules and so on. Development on the FYND platform also got off to a very good start and has since launched two successful projects: Kuningaskuluttaja and MOT.

However, on the svenska.yle.fi platform the change posed a lot of challenges, since we had to switch installation profiles on a running site. And since many of the paths to modules and themes are written to the database at installation time, a lot of file paths would have to be changed. When we first tried the easiest approach, simply replacing the profiles/yleprofile installation folder with profiles/syndprofile, we were left with a severely broken site that couldn’t find any modules or themes, no matter how many times we cleared the cache. So it became clear that we had to make the changes directly in the production database.

So what we did was to dump the database into an SQL file and do a search and replace on every occurrence of yleprofile, changing it into synprofile. We opted for the slightly altered name synprofile, rather than syndprofile, because the database also contains a lot of serialized data embedding the path, and PHP’s serialized strings store their exact length – so a plain text replacement only stays valid if the new name has the same number of characters as the old one.

We have around 500 000 nodes in our database at the moment, and a pretty large index, but using sed for the search and replace, the operation only took a few minutes, even though the SQL file itself is getting close to 5 gigabytes in size.

$ sed 's/yleprofile/synprofile/g' < svenskaylefi.sql > svenskaylefi_synprofile.sql

This, however, was not enough in our case. We also had a couple of modules, plus our own theme settings and features, that were not going to be used in the other distributions, so we had to change their locations manually. Fortunately, most of the file path settings are stored in just a few tables:

  • system
  • registry
  • registry_file

The paths are also stored in the cache tables, so it is advisable to truncate those at the same time, especially:

  • cache
  • cache_bootstrap
  • cache_path
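For reference, those remaining fixes amount to a handful of statements of roughly this shape (a sketch against the standard Drupal 7 schema, runnable e.g. with drush php-eval; adjust the profile names to your own):

// Rewrite leftover profile paths in the tables that store file locations.
foreach (array('system', 'registry', 'registry_file') as $table) {
  db_query("UPDATE {" . $table . "} SET filename = REPLACE(filename, 'profiles/yleprofile', 'profiles/synprofile')");
}
// The same paths live in the cache tables, so empty those outright.
foreach (array('cache', 'cache_bootstrap', 'cache_path') as $table) {
  db_truncate($table)->execute();
}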

It took a few tries to get every step of this workflow to work without problems, but once we had figured it out, the migration process went pretty smoothly. Just one final hurdle gave us a bit of a cold sweat, when the site wouldn’t even bootstrap although the migration had otherwise gone smoothly. A manual flush of the memcache server solved that too.

I, Joakim


After almost three months on Svenska Yle’s web publishing team, it is about time I introduce myself: Hi, my name is Joakim, I go by Jocke, and I am a coder.

Opinions differ quite a bit on what a coder is and who gets to call themselves one. Some purists think I should not use the title, since I only write programs in PHP, a so-called scripting language that is interpreted by the computer without the programmer first having to translate it into machine code. The same purists have heart attacks when someone who ”only” works with HTML calls themselves a coder. The above are among my main tools, and for simplicity’s sake I still usually introduce myself as a coder.

Here is my attempt to define coding, and at the same time a description of what I do here at YLE: a coder solves logical problems and then writes down the solution in a language that he masters and that the server supports. Bonus points if you write down the solution in such a way that other coders understand it too. If you want to be extra open, you can write the code so that the whole world can benefit from it and even develop it further. Then you have to follow certain guidelines and code at a fairly general level, instead of choosing solutions that can only be used by our narrow niche. I like that kind of openness, and it was a pleasant surprise to hear that openness is valued at YLE too, and that big efforts are under way to make what we do more open.

My official title is ”web developer”, and here at YLE I will primarily be working on the website svenska.yle.fi, specifically as a Drupal expert.

Perhaps it is easier to understand what I do if I give a concrete example. When YLE’s editors write web articles, each article should preferably be tagged with keywords relevant to the subject. The words cannot be chosen arbitrarily; they have to be looked up in a vocabulary, a so-called ”ontology”. One of my first tasks was to make it possible for editors to tag their articles with words from ontologies other than the default one, ”koko”. A concrete example is ”mesh”, which contains medical terms and is to be used by the editors of Webbdoktorn. As a coder you tackle the task step by step: first I had to see whether I could search for mesh terms in the ONKI service at all. Once I could do that on a technical level, I had to think about how an editor, while entering an article, would most easily specify that keywords should be fetched from, say, mesh. There are many options: you can choose from a drop-down menu, you can tick one or more ontologies with checkboxes, or I as a coder can require that the editor knows to type the ontology’s name before the search term, e.g. ”mesh:huvudvärk” (”headache”). The last option is a good one if only very few people will use other ontologies: we want to hide the feature from those who never need it, so that the tool does not become harder to use.

Back to the introduction. At the time of writing I am 37 years old, and I worked as a web developer and trainer for the same company for a good 15 years before coming to YLE. About 90% of the training was done in cooperation with Arcada, and I have spent thousands of hours in classrooms, both on Drumsö and in Arabia after Arcada moved there. Almost all of the teaching was continuing education for adult students, and the courses covered everything from photography and Microsoft Excel to programming in various languages, such as PHP, HTML (sorry, purists), JavaScript and Flash ActionScript.

When I came to YLE I did not have particularly deep knowledge of Drupal, since I had spent about ten years working on a competing product, a content management system of our own that does roughly the same things as Drupal. It was interesting to notice that many of the solutions found in Drupal are things I had pondered myself, so it was not particularly hard to ”get into” Drupal’s way of doing things and the rules for how you customize solutions.

In my previous life as a developer I worked a lot, and often, on various web projects. The big difference between my life then and now was that the organisation was so small that I often ”got” to take part in, and even be responsible for, the whole process: the first meeting with a new client, showing off previous projects, going through the client’s specifications and wishes, questions about budget, putting together a quote to match, drawing up a timetable, implementing layout, HTML, JavaScript, CSS, PHP and database, communicating with the client by email, phone and regular meetings throughout the project, registering the web address, configuring the server and email, testing, bug fixes, closing the project, complaints… Examples of projects we did include web shops, election compasses, auction systems, financial systems, ”ordinary” websites and many completely tailor-made solutions for whatever the client said they needed.

What the move to YLE has brought is a chance to focus on fewer things and do them more properly. I used to be a sort of jack-of-all-trades, and now I see myself more as a web developer or coder who also gets to take part in visioning and planning at times, but who still mainly works within a fairly narrow area. The danger in juggling so many tasks, and even different projects, at once, as I used to, is that quality easily suffers. Especially when you are constantly constrained by the bottom line of a quote that is usually at the lower end of what is humanly possible. The risk is also that you become too focused on the bottom line and cannot take pride in having made something fine, only in the firm having earned a lot of money.

In my private life I am drawn mostly to various creative hobbies, such as photography, chili growing, cooking, baking, playing the piano and home renovation. I live in a hundred-year-old house with many flaking surfaces and cracked walls, and I am gradually trying to learn the right way to take care of the place. Computers are also quite close to my heart, and now that my son has reached the mature age of three, it is interesting to see how he takes to technology. It took him surprisingly long to understand why there should be TV programmes that air only once, at a particular time. He is also utterly puzzled when you talk on the phone with someone without seeing their video image. Secretly I envy him, born into a world so much more technologically advanced than mine was in the seventies. I think that says something about who I am as a person, and my motto or life philosophy could really be expressed as ”show me more cool things!”. It feels like many people here at YLE think along similar lines, so I felt both at home and welcome from the start.