In part 3 of my blog series on AngularJS migration, I go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.

Preparing your Application for Migration

Before beginning to migrate it’s necessary to prepare and align your AngularJS application with Angular. These preparation steps are all about making the code more decoupled, more maintainable, and better aligned with modern development tools.

The AngularJS Style Guide

Ensure that the current code base follows the AngularJS style guide. Angular takes the best parts of AngularJS and leaves behind the not-so-great parts. If you build your AngularJS application in a structured way using best practices, it will include the best parts and none of the bad ones, making migration much easier.
The key concepts of the style guide are:

  1. One component per file. Structuring components in this way will make them easier to find and easier to migrate one at a time.
  2. Use a ‘folders by feature’ structure so that different parts of the application are in their own folders and NgModules.
  3. Use Component Directives. In Angular, applications are built from components; the equivalent in AngularJS is a Component Directive with specific attributes set, namely:
    • restrict: ‘E’. Components are usually used as elements.
    • scope: {}. An isolate scope. In Angular, components are always isolated from their surroundings, and you should do this in AngularJS too.
    • bindToController: {}. Component inputs and outputs should be bound to the controller instead of using the $scope.
    • controller and controllerAs. Components have their own controllers.
    • template or templateUrl. Components have their own templates.
  4. Use a module loader like SystemJS or Webpack to import all of the components in the application rather than writing individual imports in <script> tags. This makes managing your components easier and also allows you to bundle up the application for deployment.
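Putting those attributes together, a style-guide-compliant component directive might look like the sketch below. The names (`heroDetail`, the `hero` binding) are illustrative only, not taken from any real application:

```typescript
// Sketch of a component directive with the attributes described above.
// One component per file; the directive is used as an element:
// <hero-detail hero="..."></hero-detail>
function heroDetail() {
  return {
    restrict: 'E',                    // used as an element
    scope: {},                        // isolate scope, as in Angular components
    bindToController: { hero: '<' },  // inputs bound to the controller, not $scope
    controller: function HeroDetailController() { },
    controllerAs: '$ctrl',
    template: '<h2>{{ $ctrl.hero.name }}</h2>',
  };
}

// Registered in its own file:
// angular.module('myApp').directive('heroDetail', heroDetail);
```

A directive shaped like this maps almost one-to-one onto an Angular `@Component`, which is what makes the later migration mechanical.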

Migrating to TypeScript

The style guide also suggests migrating to TypeScript before moving to Angular; however, this can also be done as you migrate each component. Information on the recommended approach can be found in the official upgrade guide, but my recommendation would be to leave any migration to TypeScript until you begin to migrate the AngularJS components.

Hybrid Routers

Angular Router

Angular has a new router that replaces the one in AngularJS. Both routers can't be used at the same time, but the AngularJS router can serve Angular components while you do the migration.
In order to switch to the new built-in Angular router, you must first convert all your AngularJS components to Angular. Once this is done, you can switch over to the Angular router even though the application is still hosted as an AngularJS application.
In order to bring in the Angular router, you need to create a new top-level component that has the <router-outlet></router-outlet> component in its template. The upgrade guide has steps to take you through this process.
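That top-level component is little more than a shell hosting the router outlet. The sketch below shows the idea; the `app-root` selector and class name are assumptions, not names from the original post:

```typescript
import { Component } from '@angular/core';

// Hypothetical top-level shell component. Its only job is to host the
// Angular router's outlet so that routed components render into it.
@Component({
  selector: 'app-root',
  template: `<router-outlet></router-outlet>`,
})
export class AppComponent { }
```

Note that `RouterModule` (with your route configuration) must also be imported in the root NgModule for `<router-outlet>` to resolve.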

Angular-UI Router

UI-Router has a hybrid version that serves both AngularJS and Angular components. While migrating to Angular, this hybrid version needs to be used until all components and services are migrated; then the new UI-Router for Angular can be used instead.
To use the hybrid version, you will first need to remove angular-ui-router (or @uirouter/angularjs) from the application's package.json and add @uirouter/angular-hybrid instead.
The next step is to add the ui.router.upgrade module to your AngularJS application's dependencies:
let ng1module = angular.module('myApp', ['ui.router', 'ui.router.upgrade']);
There are some specific bootstrapping requirements to initialise the hybrid UI-Router; step-by-step instructions are documented in the repository's wiki.


Bootstrapping a Hybrid Application

In order to run AngularJS and Angular simultaneously, you need to bootstrap both versions manually. If you have automatically bootstrapped your AngularJS application using the ng-app directive, delete all references to it in the HTML template and instead bootstrap the application manually using the angular.bootstrap function.
When bootstrapping a hybrid application, you first need to bootstrap Angular and then use the UpgradeModule to bootstrap AngularJS. To do this, you need an Angular application to begin migrating to! There are a number of ways to create one: the official upgrade guide suggests using the Angular QuickStart project, but you could also use the Angular CLI. If you don't know anything about Angular versions 2 and above, now is the time to get familiar with the new framework you'll be migrating to.
Now you should have a manually bootstrapped AngularJS version and a non-bootstrapped Angular version of your application. The next step is to install the @angular/upgrade package so you can bootstrap both versions.
Run npm install @angular/upgrade --save. Then create a new root module in your Angular application called app.module.ts and import the upgrade package.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule
  ]
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) { }
  ngDoBootstrap() {
    this.upgrade.bootstrap(document.body, ['angularJSapp'], { strictDi: true });
  }
}

This new app module is used to bootstrap the AngularJS application; replace 'angularJSapp' with the name of your AngularJS application.
Finally, update the Angular entry file (usually main.ts) to bootstrap the app.module we've just created.
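A typical entry file simply delegates to the module. Because the module has no bootstrap array, Angular calls its ngDoBootstrap method, which in turn starts the AngularJS side (the file path in the import is an assumption about your project layout):

```typescript
// Entry file: bootstrap the Angular platform with the hybrid AppModule.
// Angular sees no bootstrap component on the module, so it invokes
// ngDoBootstrap, which bootstraps the AngularJS application.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);
```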
That’s it! You are now running a hybrid application. The next step is to begin converting your AngularJS directives and services to Angular versions. These steps are based on the official Angular upgrade guide.

Doing the Migration

Using Angular Components from AngularJS Code

If you are following the Horizontal Slicing method of migration mentioned earlier, then you will need to use newly migrated Angular components in the AngularJS version of the application. The following examples are adapted from the official upgrade documentation, where more detailed examples can be found.
Below is a simple Angular component:

import { Component } from '@angular/core';

@Component({
  selector: 'hero-detail',
  template: `
    <h2>Windstorm details!</h2>
    <div><label>id: </label>1</div>
  `
})
export class HeroDetailComponent { }

To use this in AngularJS you will first need to downgrade it using the downgradeComponent function in the upgrade package we imported earlier. This will create an AngularJS directive that can then be used in the AngularJS application.

import { HeroDetailComponent } from './hero-detail.component';
/* . . . */
import { downgradeComponent } from '@angular/upgrade/static';

angular.module('heroApp', [])
  .directive(
    'heroDetail',
    downgradeComponent({ component: HeroDetailComponent }) as angular.IDirectiveFactory
  );

The Angular component still needs to be added to the declarations in the AppModule. Because this component is being used from the AngularJS module and is an entry point into the Angular application, you must add it to the entryComponents for the NgModule.

import { HeroDetailComponent } from './hero-detail.component';

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule
  ],
  declarations: [
    HeroDetailComponent
  ],
  entryComponents: [
    HeroDetailComponent
  ]
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) { }
  ngDoBootstrap() {
    this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
  }
}

You can now use the heroDetail directive in any of the AngularJS templates.

Using AngularJS Component Directives from Angular Code

In most cases you will need to use Angular components in the AngularJS application; however, the reverse is also possible.
If your components follow the component directive style described in the AngularJS style guide then it’s possible to upgrade simple components. Take the following basic component directive:

export const heroDetail = {
  template: `
    <h2>Windstorm details!</h2>
    <div><label>id: </label>1</div>
  `,
  controller: function() {
  }
};

This component can be upgraded by modifying it to extend the UpgradeComponent.

import { Directive, ElementRef, Injector } from '@angular/core';
import { UpgradeComponent } from '@angular/upgrade/static';

@Directive({
  selector: 'hero-detail'
})
export class HeroDetailDirective extends UpgradeComponent {
  constructor(elementRef: ElementRef, injector: Injector) {
    super('heroDetail', elementRef, injector);
  }
}

Now you have an Angular component based on your AngularJS component directive that can be used in your Angular application. To include it simply add it to the declarations array in app.module.ts.

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule
  ],
  declarations: [
    HeroDetailDirective
  ],
  /* . . . */
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) { }
  ngDoBootstrap() {
    this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
  }
}

Migrating your component directives and services should now be relatively straightforward. A detailed example of migrating the AngularJS Phone Catalogue application, which includes examples of transclusion, can be found in the official upgrade guide.
For the most part, if the AngularJS style guide has been followed, then the change from component directives to components should simply be a syntax change, as no internal logic should need to change. That said, there are some services that are not available in Angular, and so alternatives need to be found. Below is a list of some common issues that I've experienced when migrating AngularJS projects.

Removing $rootScope

Since $rootScope is not available in Angular, all references to it must be removed from the application. In most cases, state held on $rootScope belongs in an injectable service, and $rootScope.$broadcast/$on messaging can be replaced with a shared event service.
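As an illustration, a $rootScope.$broadcast/$on event bus can be replaced by a shared service. Below is a minimal framework-free sketch; in a real Angular application this would be an @Injectable service, typically built on an RxJS Subject, and all names here are illustrative:

```typescript
// Hypothetical replacement for $rootScope.$broadcast / $rootScope.$on:
// a shared event-bus service holding named handler lists.
type Handler = (payload: unknown) => void;

class EventBusService {
  private handlers = new Map<string, Handler[]>();

  // Equivalent of $rootScope.$on; returns an unsubscribe function,
  // mirroring the deregistration function $on returns.
  on(event: string, handler: Handler): () => void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
    return () => {
      this.handlers.set(
        event,
        (this.handlers.get(event) ?? []).filter((h) => h !== handler),
      );
    };
  }

  // Equivalent of $rootScope.$broadcast.
  broadcast(event: string, payload?: unknown): void {
    (this.handlers.get(event) ?? []).forEach((h) => h(payload));
  }
}
```

Registering a single instance of such a service (via Angular's dependency injection) gives every component the same cross-cutting messaging that $rootScope used to provide, without any global scope.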

Removing $compile

Like $rootScope, $compile is not available in Angular, so all references to it must be removed from the application. Below are solutions to most scenarios of $compile being used:

  • The DomSanitizer service from '@angular/platform-browser' can be used to replace $compileProvider.aHrefSanitizationWhitelist
  • $compileProvider.preAssignBindingsEnabled(true) is now deprecated. Components requiring bindings to be available in the constructor should be rewritten to only require bindings in $onInit()
  • Replace the need for $compile(element)($scope); by utilising the Dynamic Component Loader
  • Components will need to be rewritten to remove $element.replaceWith().
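To sketch the dynamic-loading point above: instead of compiling markup at runtime with $compile, a host component can attach a component programmatically through ComponentFactoryResolver (the API available in Angular at the time). The component and selector names here are assumptions for illustration:

```typescript
import {
  Component, ComponentFactoryResolver, ViewChild, ViewContainerRef
} from '@angular/core';
import { HeroDetailComponent } from './hero-detail.component'; // hypothetical

// Host component that attaches HeroDetailComponent at runtime,
// replacing the old $compile(element)($scope) pattern.
@Component({
  selector: 'dynamic-host',
  template: `<ng-container #host></ng-container>`,
})
export class DynamicHostComponent {
  @ViewChild('host', { read: ViewContainerRef }) host!: ViewContainerRef;

  constructor(private resolver: ComponentFactoryResolver) { }

  show(): void {
    const factory = this.resolver.resolveComponentFactory(HeroDetailComponent);
    this.host.clear();
    this.host.createComponent(factory); // renders into the #host container
  }
}
```

The dynamically loaded component also needs to be listed in the NgModule's entryComponents, just like the downgraded components shown earlier.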


In this three-part blog series, we've covered the reasons for migrating, the current AngularJS landscape, migration tips and resources, methods for migration, preparing for a migration, different ways of using migrated components and common architectural changes.
The goal of this blog series was to give a comprehensive guide to anyone considering migrating from AngularJS to Angular, based on my experience. Hopefully we've achieved this, and if your problems haven't been addressed directly in the blog, the links will have pointed you in the right direction. If you have any questions, please post them in the comments.
AngularJS migration is not an easy task, but it's not impossible! Good preparation and planning are key, and hopefully this blog series will help you on your way.


You can read part 1 of this series here:
And you can read part 2 here:

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.

Linkedin profile

Do you know a perfect match? Sharing is caring

This is a large topic and so needs a large introduction.
Essentially, I’m trying to explore what, as designers, we can do to save the planet. We should all know the consequences of climate change by now but to remind you, here are a few headlines from this year:

Pretty grim for sure, and I wish this was the media exaggerating as usual. Unfortunately, there is no sugar-coating these facts. I have experienced the changes in the ocean and coral first-hand. I have visited places twice over two years, only to find that every piece of coral had bleached and the fish had disappeared. There are very important yet simple steps that anyone can take to have a positive impact on the environment. Some of these are:

  1. Don’t buy things unless you need them. More on minimalism here.
  2. Cut out meat from your diet, or at least reduce it. This is probably the biggest one. Take a look.
  3. Cycle instead of driving a car.
  4. Go for a smaller living space. Some are even considering The Tiny House Movement.

The list can go on forever, but I advise you to go through a more serious list on the WWF website. (hint: it requires contacting political decision-makers to make larger green decisions.)

The next question is:
What can I do as a designer to prevent, or at least slow down climate change?

There are at least 4 things we can start with.

1. The internet

“While we might not be able to stop using the web, we can change how we build and power it, to make it planet friendly as well as user-friendly.” –Planet Friendly Web Guide

The web is the largest machine ever built in the history of mankind, with 4.5 billion users. The scary part is that it's less than 50 years old. It's complex, with many moving components. Simply put, however, more data requires more energy. In fact, in 2019 it will emit around 1 billion tons of CO2, which is more than the entire aviation industry.
The internet does pollute; nevertheless, the industry as a whole is conscious of this and willing to move in a more positive direction. Yet again, start locally: see if you or your client are using green hosting, and check how eco-friendly your website is on EcoGrader.
Usually, cloud solutions are more sustainable than traditional data centres. Keep in mind, though, that not all clouds are created equal. Google, for example, has committed to being 100 per cent renewable-powered. Amazon, on the other hand, has only started to take its first steps in the right direction. Click-Green can give you a good insight into your favourite brands and their greenness.
Since we have covered most of the facts and theories about internet pollution, let's take the initiative by taking a deeper dive into the Planet Friendly Web Guide, a great open-source knowledge library equipping you with the knowledge you need to make conscious and informed decisions.

2. Start small, Start in the office.

Digital design is surprisingly taxing on the environment: Sharpies, post-its, laptops, phones, iPads, pens and lots of coffee. Luckily, we can offset many of these quite easily. I found this simple spreadsheet to calculate your CO2 emissions.
The largest savings can be made on travel, which unfortunately isn't as easy as cutting back on paper. Clients still need to be met, and users have to be interviewed. Fortunately, most progressive companies are open to remote work, and tools that make remote collaborative design, research and interviewing almost seamless are developing at a rapid pace. At Gofore, we use tools such as InVision, Axure and Abstract to allow for remote, asynchronous reviews and version control.
Finally, I have been experimenting with holding large workshops using tools such as Mural and Miro. They are not flawless, but they are great considering they are still in their infancy. As a UX designer with an analytical mind, you might look at the numbers and conclude that each individual saving doesn't actually make a difference, so let's look at the global scale. According to LinkedIn, there are about one million UX designers worldwide, and that is excluding service, UI, visual and graphic designers. With ease, we could save about 3 million tons of CO2, which is more than closing an entire coal-fired power station. Designers are smart and creative, and our power should definitely not be underestimated. Every small action counts: saving on paper, travelling less and drinking black coffee instead of a latte all make a big difference.

3. Greenify your products

Let's pay attention to where our products meet the users; we as designers are partly responsible for that. The great news is that most users are concerned about the environment and are willing to make a positive change, although their motivation gets trumped by daily life, old habits and convenience. What we can do as designers is give them the means and motivation to pull (not push) their actions towards positive ones for the environment. We can ask some questions to determine if we are on the right track:

  • Is this helping or harming the environment? (a simple question that might require extensive research)
  • What are we making?
  • If what we are making has no sustainable alternative, how can we create the greenest experience with what we have?

Sometimes, solutions are easy and straightforward. When things are physical, such as shipping and delivery, we can promote greener delivery options; holiday booking apps and sites can offer ways to offset the user's carbon footprint, and so on. Most of what we design, on the other hand, is intangible, and that makes sustainable design thinking harder. Luckily, with a designer's brain, nothing is impossible.
Asking users to go green is tricky: you are most likely asking them to sacrifice time, convenience or money. Design with Intent gives a good set of persuasion methods; I don't agree with all of them, as some contain dark patterns. If deemed effective in actually saving our planet, however, I would explore "green patterns", for example choosing the more environmentally friendly option by default for the user.

4. Size does matter, the smaller the better.

We are seeing banners growing larger, hero images taking up entire screens, video used for visual appeal with no other function, and more. This is harmful in several ways, starting with the digital divide. We are designing products to be used worldwide, and part of our design duty is to care about the user experience worldwide: users in Bangalore with a 0.5 Mb/s internet connection should have the same experience as people in Helsinki. From the planet's point of view, more data means more fossil fuels. Websites are only increasing in size, from 500 KB in 2011 to 4,000 KB in 2019. The majority of this lies with us, the designers. Modern sites and applications use web fonts, multiple JS libraries, high-resolution images and videos, perhaps adding a better visual experience, but definitely a more data-intensive and thus more harmful experience for the planet.
Let's not rely on internet speeds getting faster and instead work on making our products more lightweight. Research has also shown that a faster user experience correlates with better conversions and higher user satisfaction. Take a look at the winners of the 10k Apart competition: they have achieved interactive, excellent designs whilst using 1/300th of the data of an average page.
A great tool for optimising your designs is Performance Budget.

In conclusion

  • Raise awareness through design; educate and guide your users on making the right choices.
  • Put data over everything: deconstruct data to find the right solution, and beware of greenwashing!
  • Shift to green hosting and you could save thousands of kilograms of CO2 (again, beware of greenwashing).
  • Start with ourselves, as people first and as designers second.
  • Optimise design and code across all mediums.

We are experiencing a revolution in design: it's being discussed in relation to the world's largest topics, such as diversity and inclusion, the digital divide and politics. Sustainability should become an inseparable part of that conversation. This blog merely scratches the surface, and I would be grateful to hear your ideas on how we can sustain this planet for our children.

Anmar Matrood

Anmar is a designer with a strong background in UX and visual design. His passion is to simplify complex UX problems and his goal is to make intricate information accessible to the masses. Anmar is also an avid freediver, photographer, traveller and researcher.

Linkedin profile

Do you know a perfect match? Sharing is caring

Gofore + Shadeshares Campaign

Changing the world for the better isn’t just a state of mind, it’s about actions. We want all our actions to have a positive impact. Every year we choose collaborations that have that impact.
We have collaborated with Shadeshares, a company that puts its heart, soul and expertise into 'making good'. Shadeshares makes beautiful wooden sunglasses that aim to have a bigger positive impact than just sun protection.
The company was established by Finnish technology entrepreneur Jan-Erik Westergård and his Kenyan wife Veronica, who strived to improve lives in Kenyan slums through education and work. 30% of the sunglasses' sales price is channelled to do good: 20% of every pair sold directly supports the professional education and employment of young people living in Kenya, and 10% of the sale price is donated to Finnish charities. We chose to support Icehearts, an organisation that prevents young people's social exclusion, enhances social skills and promotes well-being through team sports. They also provide consistent long-term support for vulnerable children in Finland.

Sharing good impact with wooden Shadeshare & Gofore collaboration sunglasses

We want to give away 20 pairs of wooden, hand-made Shadeshare & Gofore collaboration sunglasses. Find the campaign post on Gofore's Instagram and tell your story about a positive moment in the comment section. You can give praise to a workmate who smiled at you, to the person who made you a fresh morning coffee, or to yourself for doing something great. We want to hear all impactful stories, however big or small.
The campaign runs from June 19th to July 4th; after that, we will randomly pick the winners and send direct messages to the lucky ones.


Campaign information

All glasses are handmade and different. The shades are made from lacquer-coated wood on the outside and painted blue on the inside. The brown lenses are polarised and UV-protected. The glasses' retail price is 86,80 € (including VAT 24%). You can choose from two different shapes, Rouvali or Eppu. The prize includes one pair of sunglasses shipped to the receiver's home address in Finland.
If you decide to participate in the campaign, we will collect your username and name in order to carry out the lottery and contact the winners. We will process the data confidentially and will not disclose it to any third party. After the campaign, we will delete the data in a secure manner. You can find more information on how Gofore processes personal information in our privacy policy.
The campaign time is 19.6.–4.7.2019, and the campaign will run on Gofore's Instagram. Winners will be announced on July 4th via direct message; Gofore won't publish the names of the winners publicly. You can participate in the campaign by commenting with a story about a positive impact. Participation is open to everyone who lives in Finland and is over 12 years old. Instagram has no part in the lottery. Gofore pays the lottery tax. All rights reserved.
#gofore #shadeshares

Gofore Oyj

Do you know a perfect match? Sharing is caring

In part 2 of my blog series on AngularJS migration, I’ll discuss the different methods for migrating an application and highlight the tools and resources that make it possible.

Tools and Resources

ngMigration Assistant

In August 2018, Elana Olson from the Angular Developer Relations team at Google announced the launch of the ngMigration Assistant. When run, this command-line tool will analyse a code base and produce statistics on the code complexity, size and patterns used in an app. The ngMigration Assistant will then offer advice on a migration path and the preparation steps to take before beginning the migration.
The goal of the ngMigration Assistant is to supply simple, clear, and constructive guidance on how to migrate an application. Here is some example output from the tool:

Complexity: 86 controllers, 57 AngularJS components, 438 JavaScript files, and 0 Typescript files.
  * App size: 151998 lines of code
  * File Count: 943 total files/folders, 691 relevant files/folders
  * AngularJS Patterns:  $rootScope, $compile, JavaScript,  .controller
Please follow these preparation steps in the files identified before migrating with ngUpgrade.
  * App contains $rootScope, please refactor rootScope into services.
  * App contains $compile, please rewrite compile to eliminate dynamic feature of templates.
  * App contains 438 JavaScript files that need to be converted to TypeScript.
      To learn more, visit
  * App contains 86 controllers that need to be converted to AngularJS components.
      To learn more, visit

The ngMigration Assistant tool is a great place to start when considering migrating an AngularJS project. The statistics and advice it gives will help quantify the effort the migration will take and can highlight particular patterns that will need to be addressed. Be warned that the tool doesn't cover everything, and there will be additional areas of the application (external libraries and some logic, for example) that will need reworking during migration. It's a good first step, but not comprehensive.

ngMigration Forum

The ngMigration Forum gathers together resources, guides and tools for AngularJS migration. The forum allows developers to ask questions and get answers on their migration problems, and it also collates the common issues that occur during migration.

The Upgrade Guide

The Upgrade Guide contains a number of examples and walkthroughs on how to proceed with an AngularJS migration. Written by the Angular team, the guide addresses the most common cases and includes a complete example of migrating the Phone Catalogue example application.

Deciding How to Migrate

There are 3 major approaches to migrating an AngularJS application to Angular.

Complete Rewrite in Angular

The first decision to make when considering migrating your AngularJS application is whether to do it incrementally or not. If you need to keep supporting the existing application, or the application is too large to fully migrate in a reasonable timeframe, then an incremental upgrade may be the only path open to you. However, if the application is small enough, or if you are able to stop supporting the existing application or allocate enough resources, then a complete rewrite is usually the most straightforward approach.
Migrate the whole application without supporting the AngularJS version.

Pros:

  • You don't have to worry about upgrading or downgrading components
  • No interoperability issues between AngularJS and Angular
  • An opportunity to refactor areas of the code
  • You can benefit from Angular features immediately

Cons:

  • The application will be offline during the migration, or you will need to copy the code base to a new repository
  • You don't see the benefits until the whole application is migrated, which could take some time depending on the overall size
  • Since you will not see the whole application running until the end of the migration, you may discover issues as you build more features

Hybrid Applications


ngUpgrade is an Angular library that allows you to build a hybrid Angular application. The library can bootstrap an AngularJS application from an Angular application allowing you to mix AngularJS and Angular components inside the same application.
I will go into more detail on the ngUpgrade library in Part 3: Implementing the Migration but for now, it’s important to know that ngUpgrade allows you to upgrade AngularJS directives to run in Angular and downgrade Angular components to run in AngularJS.

Horizontal Slicing

When migrating using a hybrid approach, there are two methods that will gradually move your application from AngularJS to Angular. Each has its advantages and disadvantages, which I'll discuss next.
Horizontal Slicing is a term used to describe the method of migrating the building-block components first (low-level components like user inputs, date pickers etc.) and then all the components that use them, and so on, until you have upgraded the entire component tree.
(Image: Victor Savkin)
The term references the way that components are migrated in slices cutting across the whole application.

Pros:

  • The application can be upgraded without any downtime
  • Benefits are realised quickly as each component is migrated

Cons:

  • It requires additional effort to upgrade and downgrade components

Vertical Slicing

Vertical Slicing describes the method of migrating one route or feature of the application at a time. Unlike horizontal slicing, views won't mix AngularJS and Angular components; instead, each view will consist entirely of components from one framework or the other. If services or components are shared across the application, then they are duplicated for each version.
(Image: Victor Savkin)

Pros:

  • The application can be upgraded while in production
  • Benefits are gained as each route is migrated
  • You don't have to worry about compatibility between AngularJS and Angular components

Cons:

  • It takes longer to migrate a route, so benefits aren't seen as quickly as with horizontal slicing
  • Components and services may need to be duplicated if required by both the AngularJS and Angular versions

Effort Needed to Migrate

Which method you adopt depends entirely on your business objectives and the size of the application. In most cases I've found that the hybrid approach is required, and more often than not I've used vertical slicing during the migration. Maintaining a single working application at all times has always been a priority in my experience. Since the applications have also been very large, the cleanest way to organise the migration across multiple teams has been to split the application up, either by feature or by route, and migrate each part in turn.
The amount of effort required again depends on your particular circumstances (size of the code base, number of people etc.). I've found that putting everyone to work on the migration at once leads to confusion and, in turn, wasted effort. Instead, by having a small team begin the work, bootstrap the hybrid application and produce some migrated components and services, the rest of the team spends less effort getting started and can begin scaling out the migration.

Part 3: Implementing the Migration

In part 3 I’ll go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.


You can read part 3 of this series here:
You can read part 1 of this series here:

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.

Linkedin profile

Do you know a perfect match? Sharing is caring

First, what do we mean when we talk about maintenance?

We make a lot of custom-made solutions, systems, applications and services to our customers. These projects can last anywhere from a few weeks to a few years, but they do usually have a specific goal and an expiration date. However amazing the final product is, and however much we’ve learned from creating it, the product will only start bringing value to our customers after it’s gone live. This is when we enter the maintenance phase.
Software maintenance offers the customer technical support, incident management and bug fixes, plus change management and further development to their existing live product. We want to guarantee that our super amazing product keeps being super amazing and does not simply fall into decay after it’s gone live. This is a matter of pride to all of us: quality in Gofore’s project delivery even after the project has been delivered.

software maintenance meeting

How would you prepare for a marathon?

A software project typically has a beginning, includes various steps taken to create the desired product, and finally, it comes to an end. You might find yourself tempted to think that the end of the project signifies the end of the software company’s work. However, the final release of the development project is the starting gun for the software maintenance phase.
It is part of our expertise at Gofore that, at the very start of a project, we explain to the customer that we should be making plans for when the product goes live and what happens after that. You wouldn’t run a marathon without practising and training for it. The maintenance phase can last for years after the product has gone live! For example, projects usually have multiple waypoints or sprint goals during the development phase. Maintenance should be included in this thinking – not just as a single point or event to reach, but as a natural and continuous extension of the development work.

Have you ever felt like…?

Not to worry, we have a solution for you: a centralized service desk and organized software maintenance.
While we who work with software maintenance daily are very excited about our great services, the most common thing we’ve heard in the past is that “only creating new products and services through coding is fun and exciting,” while maintenance is sometimes seen as a boring routine or an ungrateful chore that no one wants to do.
If you view maintenance this way, you probably aren’t up to speed with the latest news from the world of Service Desks. Maintenance in the year 2019 looks very different from even just a few years back, and it keeps evolving at a fast pace. Robotics and automation already take care of those boring routine tasks. The first line no longer just escalates tickets or parrots, “Have you tried turning it off and on again?” Those days are history.
At Gofore, our Service Center consists of specialists who resolve the complex issues the customers couldn’t solve themselves. As all the products we create are custom-made, maintaining them requires deep understanding and knowledge of a multitude of systems, programming languages and infrastructures. Service management, i.e. the maintenance-phase equivalent of project management, is also expected to grow in importance over the next few years. Service management and software maintenance require more expertise and more specialized people year after year.

Don’t just take our word for it…

Here are some thoughts from our developers:

“Maintenance tasks improve your problem-solving skills, out-of-the-box thinking, social skills, and increase familiarity with the architecture. Participating in software maintenance is beneficial to all developers.” – Antti Simonen
“Software maintenance offers unique insight into the application’s issues and gives you a chance to make the customer happy. The quality of the code is continuously improved by maintaining your own applications.” – Petri Sarasvirta
“Understanding the application from someone else’s perspective enables you to write code that can be maintained more easily. One of the best ways to gain a better grasp of the big picture is through software maintenance.” – Antti Peltola

Our biggest supporters are our customers. Every month the people who do software maintenance at Gofore receive 5-star reviews from our very happy customers!

Actionable steps to success!

Here are some things to consider if you are a software developer:

  • When you write code, write it for others, not yourself. To put it in another way: if you can’t read your own code without a 30-page manual right now, you can only imagine how impossible the task is for someone who has to find and fix a bug in it two years later.
  • Make sure your commits are sufficiently descriptive. As all Agile developers know, documentation should not be a forced burden – but it is necessary, nonetheless. Work smart, not hard, and make sure your code speaks for itself. Remember to also keep your software delivery mechanisms (CI/CD) and infrastructure (servers, firewalls, etc) sufficiently documented.
  • Have tests and monitoring in place for production. You are the expert on what needs to be monitored and how it should be done.

And some notes for the project managers in our midst:

  • Allow time for proper documentation and make sure your customer understands its value. It should be your ethical guideline that we cannot skip such an important part of the project’s delivery.
  • Make sure your project team tests and monitors things that are significant in terms of business value. You have a unique understanding of both what is important to the customer and what your team can deliver.
  • Start preparing for the production/maintenance handover well on time. The earlier you give your colleagues who work with maintenance a heads-up, the better they can help you make the transition as smooth as possible.

software maintenance meeting at Gofore

Value for the customer

Continuous services guarantee that the custom-made system, application or service works as planned throughout its lifecycle. Stability and quality in continuous services are a matter of honour and pride to our service managers. We are seasoned professionals and know how to navigate and translate between the development team and the customer, making sure all parties understand each other.
We meet customer expectations by proactively offering new solutions and further development, keeping in mind improving the customer’s business. Continuous services free the customer’s resources from maintenance to their own business. Finally, the most important thing we offer is peace of mind – the customer simply raises a ticket describing their concerns, and our Service Center swiftly takes care of the rest.

What’s in it for me?

So, you might be wondering, “What’s in it for me?” To sum up, keeping the maintenance phase in mind has its benefits…

  • …for sales, longer lasting customer relationships
  • …for developers, doing maintenance makes you “harder, better, faster, stronger”
  • …for project managers, less stress about moving to production
  • …for our customers, stability, quality and peace of mind

Ella Lopperi

As Head of Continuous Services at Gofore, Ella is responsible for nurturing the expert community, as well as for operations and strategy. She values open communication, empathy and transparency, and believes these values are key to both great employee and customer experience. Outside of work, Ella can be found reading, playing videogames, singing, writing... or simply immersing herself in the wonders of the Universe.

Linkedin profile

Jenna Salo

Jenna works as the Continuous Services Lead and a Service Manager. Providing her customers with peace of mind is the guiding principle for Jenna's everyday work. Work culture is also dear to her heart. In her spare time, Jenna is the humble servant of two chihuahuas, and yankee cars and circle skirts light a fire in her soul.

Linkedin profile
Twitter profile


Gofore has recently completed a number of large scale AngularJS migration projects. During these projects, we’ve gathered a lot of information on the whole Angular migration process from the motivation to migrate, down to the finer technical details of the migration itself.
The purpose of this blog is to catalogue this information and offer guidance to anyone that is considering migrating. Part 1 will focus on the reasons to migrate while part 2 will detail the tools and techniques available when doing the migration and the final part will focus on the migration itself in detail.

What do we mean by AngularJS and Angular?

As new versions of Angular were developed, differentiating between the two incompatible generations of the framework became more important. Blogs, projects and discussions had to establish which version of Angular they were compatible with.
To reduce confusion, the Angular team suggested a naming convention: AngularJS refers to any 1.x version, the versions that came before the major rewrite that resulted in Angular 2, while any version from 2.0 up is simply referred to as Angular.

Why did Google decide to make such substantial changes to Angular?

As AngularJS grew in popularity and was used for bigger and bigger applications, developers started to notice performance issues. In a 2018 interview, Stephen Fluin, Developer Advocate for Angular at Google, looked back at the reasons why Angular was built and said:

“There were millions of AngularJS developers and millions of AngularJS apps. The cracks started showing. If you wrote an AngularJS app the wrong way and had thousands of things on the screen, it ended up getting very slow. The architecture was just not designed with this kind of large-scale usage in mind.”

As Google started to address the growing concerns in the development community, they came to the realisation that revolutionary rather than evolutionary changes were needed. In the same article, Stephen Fluin goes on to say:

“The Team realized that there wasn’t an easy path to make AngularJS what it needed to be. And that’s why Angular was born. We moved from, for example, a controller and template model into a more component-based model. We added a compilation step that solved whole categories of errors that people would make in AngularJS.”

Although Google continued to support AngularJS it was clear that the future would be focused on Angular and a concerted effort was made to encourage developers to move to the new platform.

What’s the Current AngularJS Landscape?

It’s difficult to quantify how many active AngularJS applications are currently in production. However, in January 2018 Pete Bacon Darwin, AngularJS Lead Developer at Google, stated:

“In October of 2017, the user base of Angular passed 1 million developers (based on 30 day users to our documentation), and became larger than the user base of AngularJS.”

From this, we can deduce that up until October 2017 AngularJS had around a million active developers, which implies a great many AngularJS applications. Pete Bacon Darwin goes on to say:

“We will release a couple more versions this summer that includes a few important features and fixes before we move into the mode of only making security and dependency related fixes, but that is all.”

Clearly Google’s goal is to move as many applications onto Angular as possible as AngularJS moves onto legacy support. For applications still under active development this makes sense, but what about legacy applications: should they be migrated? Google’s current legacy support plan runs until June 30, 2021; after this point there is a risk that security and breaking issues will no longer be patched. Migration should be considered for any legacy application that will be used beyond this date.

Why Migrate?

Performance Increase

AngularJS can be an efficient framework for small applications, but as projects grow, the increasing number of scopes and bindings has a significant impact on performance. With Angular 6, a new rendering engine was introduced that substantially decreased compilation time and bundle sizes, while Web Workers and server-side rendering open up the possibility of further significant performance boosts.


TypeScript

Angular is built in TypeScript, a typed language that compiles to JavaScript. TypeScript significantly reduces runtime errors by identifying them at an early stage. Catching errors while writing code speeds up development and increases stability.
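To illustrate the point, here is a minimal sketch; the `User` interface and `greet` function are invented for this example, not taken from any real application. The compiler rejects calls that, in plain JavaScript, would only have failed at runtime.

```typescript
// Hypothetical example: a typed function the compiler can check.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// These would be *compile-time* errors, caught before the code ever runs:
// greet({ id: 1 });   // Property 'name' is missing in type '{ id: number; }'
// greet("Alice");     // Argument of type 'string' is not assignable to 'User'

console.log(greet({ id: 1, name: "Alice" })); // prints "Hello, Alice"
```

In an AngularJS application written in plain JavaScript, both of the commented-out calls would only fail once that code path actually executed in the browser.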

Mobile Support

Unlike AngularJS, Angular is built from the ground up to support development across platforms. Angular components can be reused across multiple environments reducing the amount of duplication needed to get applications running on mobile devices. This and a smaller overall memory footprint makes Angular run faster on mobile devices.

Tooling Support

The inclusion of the Angular CLI allows developers to build services, modules and components quickly by utilising templates. This frees developers up to focus on building or improving on new features rather than writing boilerplate code.


Maintainability

AngularJS provides a flexible way of building applications that can quickly become unwieldy if not supported by strict coding standards. Angular imposes a structured, component-based architecture, making it much easier to build and maintain larger applications.

Data Binding

Two-way data binding in AngularJS was one of the primary causes of slowdown in larger applications: the bigger the application, the more checks had to be performed in each digest cycle. Angular’s change detection strategy eliminates the need to check branches where no changes have occurred, significantly reducing the work done in each pass.
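The difference can be sketched in a few lines of TypeScript. This is an illustrative toy model, not framework code: `digest` mimics AngularJS checking every watcher on every cycle, while `detectChanges` mimics skipping a component subtree whose input reference has not changed, the idea behind Angular’s OnPush strategy.

```typescript
// Toy model of AngularJS dirty checking: every digest touches every watcher.
type Watcher = { last: unknown; get: () => unknown };

function digest(watchers: Watcher[]): number {
  let checks = 0;
  for (const w of watchers) {
    checks++; // work grows linearly with the total number of bindings
    const current = w.get();
    if (current !== w.last) w.last = current;
  }
  return checks;
}

// Toy model of reference-based change detection: unchanged branches are skipped.
type Component = { input: object; children: Component[] };

function detectChanges(c: Component, seen: WeakMap<Component, object>): number {
  if (seen.get(c) === c.input) return 0; // same input reference: skip whole subtree
  seen.set(c, c.input);
  let checks = 1;
  for (const child of c.children) checks += detectChanges(child, seen);
  return checks;
}
```

Running `detectChanges` twice over the same tree performs all the checks on the first pass and none on the second, whereas `digest` repeats the full amount of work on every cycle.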


Long Term Support

In July 2018 Google announced that AngularJS would enter a 3-year long-term support period. Updates would only be made if one of the following scenarios came about:

  • A security flaw is detected in the 1.7.x branch of the framework
  • One of the major browsers releases a version that will cause current production applications using AngularJS 1.7.x to stop working
  • The jQuery library releases a version that will cause current production applications using AngularJS 1.7.x to stop working.

With support ending on June 30, 2021, there is a risk that security and breaking issues will no longer be patched.
Google strongly advises migrating and has moved to a structured, timed release schedule for Angular, with new versions released every 6 months. Currently, Angular is at version 7, and Google has announced a roadmap running to the end of 2019 with version 9. All major releases have at least 18 months of support, and there are no plans for the kind of breaking changes that happened between AngularJS and Angular.

Part 2: Tools, Resources and Methods

In part 2 I’ll discuss the different methods for migrating an application and highlight the tools and resources that make it possible.


You can read part 2 of this series here:
You can read part 3 of this series here:



In digital times, people sometimes forget that the analog world still provides more organic results. Even though devices like the iPad Pro with Procreate and the Apple Pencil, or Wacom Cintiqs, are getting closer and closer to analog results, analog still rules. It’s interesting to follow digital attempts at watercolour, oil painting, paint splatter and smudging. They are close to analog but still not quite the same.

Analog vs. digital

When deciding on a workflow that is suitable for you, there are some key points to consider for each medium:

Digital – pros:

  • Vector output
  • Easy undoing
  • Easy moving of objects
  • Layers
  • Easy to achieve multiple almost similar versions of the same image
  • Version control (**)

Digital – cons:

  • The loss of happy little accidents
  • Limited by tooling (apps, software, stylus, …)
  • Human-generated algorithms

Analog – pros:

  • Organic results
  • Unlimited tooling (***)
  • Analog, rough, tangible results
  • Freedom of creation

Analog – cons:

  • The need for digitalization
  • Analog output (at first*)
  • 1 version to work on
  • Not-so-easy undoing****

* = can be converted to digital rather easily nowadays, like for example with Adobe Capture or Scan + Adobe Illustrator Live Trace
** = Git or a WYSIWYG tool of your choice (note: if you have no idea what version control is, now is the best time to learn it!)
*** = pen, paper, ink, brush, paint, markers, scissors, sponge, water, hands, dripping, tossing, smudging, printmaking, painting, cut-and-paste, ..etc.
**** = Eraser, white opaque colour, redoing, reworking, …etc.


Time equals money – so speed is essential.
The speed of creation is hard to compare. It really depends on the tools you are familiar with (and learn easily).
Consider creating an organic background both in digital and analog. The time spent on analog will always need additional time for digitization. Let’s take a closer look at this interesting topic.

Options for digitizing

Scanner + Trace in Adobe Illustrator (Desktop) – The Oldie

The internet is full of articles about this, read this for example
Basically, you scan your image and then trace it to vector (or use it as a raster version). Depending on the scanned original + your live trace skills you might end up with great results.
In the end, you will have a vector version. Some images are virtually impossible to vectorize, while line drawings, such as comics, are easy.

Adobe Capture (Mobile App) – Easy Vectorized Results

Okay, this is what I love. I literally do.
Adobe Capture is not a well-known app. It’s a free app by Adobe that allows you to digitize assets to Adobe Creative Cloud and then use them instantly on synced devices – WHOAAAH!
Adobe Capture has the following sections:

  • Materials – Easy creation of 3D materials etc, based on image/photo
  • Type – Recognise a typeface based on an image/photo
  • Shapes – Scan any raster shape into a vector from an image/photo – Wait, REALLY?!!!!
  • Colours – Take an image of anything and create a colour scheme – Huh!?
  • Patterns – Create a pattern from an image/photo, similar to a kaleidoscope
  • Brushes – Custom brushes from an image/photo

Instead of writing an extensive tutorial on the app, check this video by Adobe:

For a free app, it truly delivers something extraordinary. I use this app weekly with comic creation and similar tasks. Just take a photo in the analog world and use it instantly on digital as a vector – this is as good as it gets.

Examples & Conclusions

There ain’t exactly one do-it-all workflow for creating visual designs and drawings. A true master needs to understand the context and the end result. Each end-media has its own limitations that need to be taken into account when planning a workflow. 
Example 1: Praise IRL Label – digital + analog
When I was working on the Gofore Praise IRL label, I ended up mixing raster and vector graphics. (Gofore Praise IRL is one of our own Gofore beers)
Gofore IRL label
I have done print designs in the past, so I knew that I could create a raster background by tossing paints on paper in the analog world and scanning the result. I just needed to be sure that the scanned end result would have enough resolution (at least 300 DPI for the printed label). I post-processed the background in Photoshop and then placed it into an Adobe InDesign document, which, on the other hand, was all vector. TL;DR: in the end result you cannot tell that the background image is not vector as well. It all looks good if you have enough DPI.
Example 2:  Craig Thompson’s workflow for comics
Craig Thompson is a famous do-it-all comic artist who has released awesome graphic novels such as Habibi and the autobiographical love story Blankets. A direct excerpt from his blog says:

[I] settled on a compromise common among comics pros – I pencil the pages digitally [using a Cintiq], then print out blue lines and ink on actual paper.
The advantage of digital penciling is I can see a chapter all at once (top right photo), cut&paste, zoom in close, edit on the fly, and work standing up (top left photo, avec Momo). But digital inking still looks too slick to me — I prefer the flawed & tangible qualities of fussy sable brushes on paper. Foot in both worlds!

This kind of unique combination comes from the perfect understanding of his own medium and workflow.


To achieve the best results, you will need to:

  • Understand your medium and its nuances
  • Understand the optimal workflow for you (time spent, skillset,…)
  • Understand the differences in vector/raster outputs
  • Understand your audience (what kind of looks do they like)
  • Benchmark other artists

Gofore Oyj


In the business of digital services, the customer is the boss, and the value the customer sees, feels, and perceives is the only metric that matters. But what does it take for a traditional product business to turn into a customer value-driven service provider?
Service Business
Bad services come into the bargain with products. Great services replace the need to buy them – or make them obsolete altogether.

When products no longer satisfy

Traditional B2B product companies are struggling to stay relevant. Research & Development pushes out new technology faster than ever, and every step from manufacturing to logistics has been fine-tuned to perfection to offer customers the best quality-price ratio.
The products have never been better, yet the sales forecasts are looking grim: traditional businesses are losing more deals every year, while the businesses that listen and react to their customers’ needs the quickest flourish. What’s going on?
traditional product business
Traditional product business relies on technical excellence. The customers are passive consumers of the product.

The day the customers changed their minds

Meet Maria, the plant manager at Industry Inc., a mid-sized manufacturing company.
On an average Monday, Maria continues working on the newest tender for the plant. With the help of her team, she’s already narrowed the options down to two. It’s time to make the final call.
Option 1 is a trusted, familiar dealer, offering an excellent product. History has shown that it’s a good choice, and the price is the same as always.
In Option 2, the provider is offering a partnership focusing on improving the plant’s process performance. The product itself is one component of the service and is included in the monthly invoice. With the service, the provider promises to remove a few of the manager’s, engineers’ and the operators’ manual tasks. “These sure are tedious and time-consuming tasks,” she notes. “I wouldn’t lose anything important and could focus my efforts into more strategic questions.” “Actually, with these key figures, we’d be able to plan our budgets six months in advance. That’d make our jobs a lot easier.”
Option 2 sounds like a better choice, but also more expensive. But after adding up the benefits, Maria comes to the conclusion that the true potential is in the long game.
“Maybe we should give it a try. If everything fails, we can always go back to the good old Option 1,” she decides.
And they never go back to Option 1.

Hard numbers drive the discussion, but perceived value tips the scales

In B2B, direct cost is always a major criterion in decision-making, but certainly not the only one. The technical excellence of the products is already taken for granted by the customers – in the end, what really makes the difference is the total value the customer feels like they are getting.
How much do you value removing the mundane tasks you dislike? What about being able to consult experts in decision-making when needed? Sharing the risks with someone? Or making you feel like a better professional at your job?
We all choose the best option for ourselves, and for the context in which we live. People value things differently, and these values are rarely visible to the naked eye.

Knowing the people behind the billing address

No business can afford to create something their customers don’t accept. But who exactly do we mean when we say ‘the customers’?
In the digital service business, customers are not passive entities that consume what the business offers – they are people with different roles, needs, and motivations of their own.
Find out who your customers are, and what they need and value. Focus your efforts on doing the most important bits for them. (It’s worth mentioning that customer needs and wants are two different things – customer-centricity doesn’t mean you have to say ‘yes’ to every customer wish!)
When true customer insight starts driving decision-making, customers become the new objective setters for your business. Those objectives start driving the technical excellence of your business. The whole current changes direction.
Digital service business
The digital service business relies on customer insight. Answering the needs of the customers drives the objectives of the whole organization.

Getting out of the black box

Creating something new never comes without risk, but you can avoid flying blind.
You need to learn how to collaborate with your customers’ key people. You need to be able to test your assumptions and ideas with them before they’re built.
It takes courage to challenge your own assumptions and approach customers with hypothetical scenarios and prototypes. Nobody likes having their ideas shot down, but it’s always better to fail with a two-month-old concept than with a launched service that’s been developed over years.
“But can we do that? What will our customers think?”
You’ll be surprised how eagerly people reserve an hour of their time to help create something that will, in the end, make their own lives easier. They’ll appreciate the opportunity to get their voices heard and make a difference. If you’re well prepared and have the right professionals to do it, people will want to pitch in. Or, at the very least they’ll appreciate that they were asked.
With the resulting, rich insight, you’ll notice that it’s much easier to make decisions on where to head next, what to change, and what to keep.

Playing the long game

Services are never ready. The service provider’s responsibility does not end when the first version is out and the first deal is won. That’s where the real acid test begins.
With services, the strength of the customer relationship is put to a test every single day. Even the best customer insight stales with time: customer needs and priorities change, and the service needs to be able to evolve with them.
Meeting and exceeding customer expectations demands resources, commitment, and, most importantly, continuous development. But the work pays off with strong and honest relationships that last. Such relationships create long-term value for both parties.

Fortune favours those who are driven by customer value

In the game of customer experience, fortune favours those who truly know and collaborate with the people behind the customer accounts, anticipate and react quickly to changes in the environment, and keep delivering their promises day after day.
Find out who your customers are and figure out what they find meaningful. Be prepared to challenge your own assumptions. Find the courage to test concepts with your customers’ key roles before they are built.
It takes courage, a change of mindset, and realignment in the way the business operates internally. But at the turn of the tide, that’s the price of staying afloat.

Janne Palovuori

Janne designs services that bring genuine value to people and business. As a service designer, he observes services through a holistic lens, focusing heavily on understanding the ‘right why’ to design for. Janne works for people, with people, meaning the bedrock of his line of work is qualitative research, facilitation and co-creative methods. Janne’s superpower is creating visualizations that conceptualize ideas and make information comprehensible, engaging and worthwhile to target audiences, across different project phases.


Open source is everywhere in today’s software business. Open source is found in programming languages, operating systems, frameworks, databases, standards and even machine learning models. However, structures behind open source projects have changed.
Open source is an asset

Rise of free sharing

Open source software is software that anyone can inspect, modify and enhance. The concept of freely sharing technological information existed long before computers. However, the concrete open source movement took shape in the ’80s, when the GNU project started and the Free Software Foundation was founded.
At the beginning of the 21st century, open source technologies gained popularity in enterprise software development. Open source-based Java and JavaScript became mainstream programming languages, and open source databases such as MySQL and PostgreSQL also achieved success. Frameworks and libraries such as Spring, Ruby on Rails and jQuery likewise mirrored that feat by eating away at more complicated commercial rivals. Linux, the most widely used open source project, even became a symbol of universal freedom and independence.
Almost all of the early open source components were created by individuals, communities, or small and non-profit organisations. At that time, big players such as IBM, Oracle and Microsoft were focusing on their own proprietary ecosystems. Early in the millennium, Microsoft’s former CEO Steve Ballmer even famously described the Linux open-source operating system as ‘communism’ and ‘a cancer’.

Clear benefits

Traditionally in the software business, the strategy has been to build and license technologies and sell them. The vendor lock-in approach where a customer is dependent on applications and the source code has been a typical asset for tech companies. However, the technology landscape was ready for contemporary players.
The next generation of tech giants such as Google, Facebook, Amazon and Netflix are closely linked to open-source projects and communities. For tech giants, open-source is a part of their technology strategy, not enemy territory. Here are a few reasons why open source is a smart move in the long term.
Implementing and maintaining numerous technologies is extremely costly. Lowering development costs by building open source components for free could have a huge impact on the company’s profitability. There is, for example, an estimate that Facebook’s Open Compute project has saved them $2 billion in data centre costs.
Hiring a good developer is never an easy task. Open source helps to mitigate this global challenge. First, it promotes company branding. A good company culture is a mechanism for attracting the right people and retaining its workers. Working closely with open source communities gives the impression of a transparent, generous and people-friendly company.
Secondly, developing relationships with an open source community results in a pipeline of developers who are familiar with the open source technology and are excited to help work on it. The community engages experts around the world who are interested in solving similar problems and developing exciting technologies. The more well-known the open source technology is, the more candidates are available for the company.
The best way to make sure the kitchen is clean is to keep it open. The same analogy works with software development. According to many researchers, open source code tends to be of better quality (i.e. fewer defects) than proprietary code. A bigger number of contributing developers, peer pressure and greater variety are just a few of the reasons why. In the best-case scenario, the open source component becomes an industry standard.

Side effects

The old tech giants changed their strategies as well. Oracle acquired Sun Microsystems, the company behind Java. IBM followed suit in 2018 when it moved to acquire the largest open source company, Red Hat.
The biggest transformation happened to Microsoft. The company decided to open source the .NET platform and released Visual Studio IDE for free. In addition to this, Microsoft plans to ship a full Linux kernel directly in Windows. Nowadays Microsoft is one of the top corporate contributors to open source projects.
However, tech giants’ active role in open source doesn’t come without a price. A painful example is the popular front-end framework AngularJS, invented by Google. Google decided to re-write the new version of AngularJS and stop support of the older version. This upset a large number of organisations and developers because they needed to completely refactor their applications.

Love of power

The tech giants have realised that company value does not come from technologies but from culture and capabilities. Open source projects fit perfectly with this mindset, and open source also adds a new tool to the tech giants' engagement strategy.
The result is that a handful of tech giants contribute to the largest number of open source projects. The dream era, when open source projects were a way to express yourself, learn and have fun, is over. Open source has turned into a business, and its rules are defined by a few players whose intentions are ambiguous.

Examples of open source projects contributed to by tech giants

Airbnb – JavaScript Style Guide, Airflow
Amazon – deep learning library DSSTNE, Amazon Ion Java
Facebook – React, React Native, GraphQL, Open Compute Project, PyTorch
Google – Android, Angular, Kubernetes, Tensorflow, Go
IBM (Red Hat) – Fedora, CentOS, Apache Spark, Ansible
Microsoft – .NET, Visual Studio Code, TypeScript, RxJS
Netflix – Chaos Monkey, Hystrix
Oracle – MySQL, OpenJDK
Twitter – Twitter Bootstrap, Aurora, Storm

Graphic design

Miia Ylinen


Juhana Huotarinen

Juhana Huotarinen is a lead consultant of software development at Gofore. Juhana’s background is in software engineering and lately, he has taken part in some of the biggest digitalisation endeavours in Finland. His blogs focus on current topics, involving agile transformation, software megatrends, and work culture. Juhana follows the ‘every business is a software business’ motto.


Recently, for about a year and a half, I was working as a developer on a bleeding-edge, business-changing, disruptive project. I cannot say much about the business or the customer itself, but I thought I would share some of my experiences of what we did and how we did it.

Our team consisted of a Scrum Master, a UI/UX designer and full-stack developers, but the whole project had multiple teams working across the globe towards common goals using a Scaled Agile Framework (SAFe). Our team’s primary focus was to implement the web UI and the higher layers of the backend stack. We also contributed to the overall design and helped with coordination between all the product owners and different teams.

One of the best things in the project was to learn and use a huge amount of different bleeding-edge open source technologies.


The key technologies for frontend development were React and Redux, in addition to the obvious HTML5, CSS3 and JavaScript ES6. With Redux, we used redux-saga for asynchronous side-effects and also some other supporting libraries such as redux-actions and reselect. CSS was written as part of the React components using styled-components. Building and bundling of the code was done using Webpack. We also had a great experience with Storybook as a means of supporting rapid development and easy documentation of UI components.

While microservices on the backend are becoming very common, this project also used micro-frontends. This approach is rarer, but the benefits are quite similar: different teams are able to work on different parts of the frontend independently, since they are loosely coupled. New micro-frontends can also be written in different languages and using different technologies, so switching to a new technology does not require rewriting all the existing functionality. As the technology for combining the micro-frontends, we started with single-spa, but later switched to an iframe-based approach. Using iframes made development and testing easier and improved our capabilities for continuous deployment.

This second solution turned out to work quite nicely. The only big challenge was related to showing full-screen components, such as modal dialogs: the iframe of a micro-frontend can only render content within itself. So, when a micro-frontend needed to open a modal dialog, it had to send a cross-window message to the top-level window, which could then do the actual rendering on top of everything else.
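The messaging pattern above can be sketched roughly as follows. All names and the message shape are illustrative, not the project's actual protocol; in a real browser the transport would be `window.parent.postMessage` plus a `message` event listener, which are simulated here so the sketch stays self-contained:

```javascript
// Message type the micro-frontend and the top-level window agree on
// (an illustrative protocol, not the project's actual one).
const OPEN_MODAL = 'OPEN_MODAL';
const TRUSTED_ORIGIN = 'https://app.example.com'; // hypothetical origin

// Inside the iframe: ask the top-level window to render a modal.
// In a real browser this would call window.parent.postMessage(msg, TRUSTED_ORIGIN).
function requestModal(postMessage, modalId, payload) {
  postMessage({ type: OPEN_MODAL, modalId, payload }, TRUSTED_ORIGIN);
}

// In the top-level window: validate origin and type before rendering,
// since any window can post messages to us.
function createModalHandler(renderModal) {
  return function onMessage(event) {
    if (event.origin !== TRUSTED_ORIGIN) return;          // ignore untrusted senders
    if (!event.data || event.data.type !== OPEN_MODAL) return;
    renderModal(event.data.modalId, event.data.payload);
  };
}

// Wire the two together with a fake in-process transport to show the flow.
const rendered = [];
const handler = createModalHandler((id, payload) => rendered.push({ id, payload }));
requestModal(
  (data, targetOrigin) => handler({ origin: targetOrigin, data }),
  'confirm-delete',
  { itemId: 42 }
);
```

The important part is the origin check in the handler: because any embedded page can post messages to the top-level window, the handler must ignore anything that does not come from a trusted micro-frontend origin.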

For frontend unit tests, we used Jest, Enzyme and Storybook snapshots, while end-to-end testing was done with TestCafe. Once again we saw that end-to-end tests are tricky to write – and quite a burden to maintain. Thus, choosing their scope carefully to get the best cost-value ratio is important, no matter what tool is used. Nevertheless, we were quite happy with TestCafe compared to the available alternatives.


The backend of the system as a whole was very complex. The dozens of microservices in the lower layers were mostly written in reactive Java and used, for example, an event sourcing architecture. On top of those, our team built around ten microservices with Node.js. The communication between services was mostly based on RESTful APIs, which our services implemented with either Express or Koa. In many cases, Apache Kafka was also used to pass Avro-serialized messages between services in a more asynchronous and robust manner. To provide real-time updates to the UI, we of course also used WebSocket connections. We learned that in some cases these Kafka-based messaging approaches can work very well; still, there is definitely a pitfall of over-engineering and over-complexity to be avoided.
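The core idea of Avro-style messaging is that producer and consumer share a schema and validate messages against it before they ever reach the topic. A minimal dependency-free sketch of that idea (in the real stack the schema would be an actual Avro schema, e.g. via a library such as avsc, and the transport a Kafka topic; both are simulated here, and all names are hypothetical):

```javascript
// A hypothetical schema shared by producer and consumer. Real Avro schemas
// are richer (records, unions, logical types); this sketch only checks
// field presence and primitive type.
const orderCreatedSchema = {
  name: 'OrderCreated',
  fields: { orderId: 'string', amount: 'number' },
};

function validate(schema, message) {
  return Object.entries(schema.fields).every(
    ([field, type]) => typeof message[field] === type
  );
}

// Producer side: refuse to publish messages that don't match the schema,
// so consumers can rely on the message shape.
function publish(topic, schema, message) {
  if (!validate(schema, message)) {
    throw new Error(`message does not match schema ${schema.name}`);
  }
  topic.push(JSON.stringify(message)); // stand-in for Avro serialization
}

// Consumer side: deserialize and process messages from the topic.
function consume(topic, handler) {
  while (topic.length > 0) handler(JSON.parse(topic.shift()));
}

const topic = [];     // stand-in for a Kafka topic
const received = [];
publish(topic, orderCreatedSchema, { orderId: 'a-1', amount: 9.9 });
consume(topic, (msg) => received.push(msg));
```

The decoupling this buys is exactly what the paragraph above describes: the producer does not know who consumes the message or when, and the shared schema is the only contract between them.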

In the persistence layer we started with CouchDB as a document database, but later preferred the PostgreSQL relational database in most cases. With PostgreSQL, we used Knex for database queries and versioned migrations, and Objection for object-relational mapping. For our use cases, we did not really need any of the benefits of a document database, especially since PostgreSQL nowadays also supports JSON data columns, which add flexibility to the standard relational data model when needed. On the other hand, the benefits of a relational database, such as better support for transactions and data migrations, were important for us.

Some essential parts of the backend infrastructure were Kong as the API gateway and Keycloak as the authorization server. Implementing complex authorization flows with OAuth 2.0, OpenID Connect and User-Managed Access (UMA 2.0) was one of our major tasks in the project. Another important architectural piece, which took most of our time in the latter stages of the project, was implementing support for the Open Service Broker API specification.

In the backend, we used the Mocha framework for unit testing but usually preferred to write the assertions with Chai. Mocking other components and API responses was covered by Sinon and Nock. Overall, our backend stack was a success and, at least for me, a pleasure to work with.


All the services in the project were containerized with Docker, and for local development we used Docker Compose. In production, the containers ran on OpenStack, orchestrated with Mesos and Marathon; later we also started moving towards Kubernetes. For continuous integration and delivery, we used GitLab CI/CD pipelines. I also liked our mandatory code reviews of every merge request: in addition to assuring code quality, they were a very nice way to share knowledge and learn from others.

In a large-scale project such as this, carefully implemented monitoring and alerting systems are, of course, essential. Metrics were gathered from all the services into Prometheus and exposed through Grafana, while all the logs were made available in Kibana. We also used Jaeger as an implementation of the OpenTracing API, which allowed us to easily trace how requests flowed between different services and where any errors originated.

The main challenges were related to the fact that running such a huge project completely on a local workstation during development is impossible. We investigated a hybrid solution, where some of the services would run locally and some in a development cloud, but found no easy answer there. As the project and the number of microservices continued to grow, we were getting close to the point where a better solution would have been needed. For the time being, we worked around the problem by mocking some of the heavier low-level services and making sure our workstations had plenty of memory.

In summary, this was a fun and challenging project to work with. I’m sure everyone learned tons of new skills and gained a lot of confidence through this project. I want to send my biggest thanks to everyone involved!




Joosa Kurvinen

Joosa is an experienced full-stack software developer with a very broad skill set. He is always eager to learn and try out new things, whether in the backend, frontend, DevOps or architecture. He has an agile mindset and always strives for clean and testable code. Joosa graduated from the University of Helsinki and wrote his master's thesis on AI and optimization algorithms.

