At Gofore we pride ourselves on having the best of the best when it comes to talent. Finding and securing the best people is not easy, and we have to find creative and interesting ways to attract them. In an effort to attract a new crop of talented developers to our office in South Wales, UK, we recently invited 15 students from around the area to join us in the Swansea office for a hackathon event. The idea was to identify students with the potential to grow into tomorrow’s experts.
We devised a task: create an application that could control a DJI Ryze Tello drone with nothing but code written during the hackathon. The students could use any language or tools they wanted; the only firm requirement was that there was some element of user interaction involved, in other words, the ability for a non-coder to control the drone. The students then faced a series of races and challenges to test out the code they had written.
With that in mind, I thought it was only fair to attempt some of the challenges myself. So in this blog, I’ll show you my take on getting a drone to fly with Node.js.

Getting started

The first part, as with any project, is getting it all set up. I decided to use JavaScript (Node.js) simply because I’m looking to get something up and running as quickly as possible. JavaScript is also the main language that I use at the moment in my day-to-day activities, so it’s fresh in my mind. Also, if I decide to add some form of UI using web technologies such as React, Vue or Angular, this will be (slightly) easier with a full-stack JS application.
In theory, you can use any programming language for this. The one requirement is that the language allows you to open a datagram socket, as we will communicate with the drone via UDP.
We can connect to the drone directly via WiFi as the Tello drone has its own hot-spot built in. Once we are connected we can start to send commands to the drone. The drone has an SDK that we can use which allows us to send plaintext commands to it.
More information is available in the official SDK documentation
I won’t go into too much detail about how I set up my project as there’s nothing that special going on. This is a simple “vanilla” Node.js project with no dependencies other than some of the core modules that come with Node. That being said, here are the steps you will need to follow to get started on your own:
Create a new folder on your system for the project. I’ve chosen to name the project tello-ctrl, but feel free to use anything you would like.

  • (optional) Initialize a new git repository (git init) and link it to a repository on GitHub/BitBucket/GitLab
  • Inside the project folder run npm init -y; this runs the usual npm init command and accepts the default values that npm init provides.
  • Create a new source folder ‘src’ in the tello-ctrl folder; this is where all of our code will go.

All of the code in this project is contained in my GitHub account
Once all that is done, you should have a directory structure that looks something like this (when viewed in vs code)
(screenshot: the project structure in VS Code)

Preparing for take off

Code for this section can be found in the basicio branch.

Once the project setup is done, it’s time to actually write some code. The first thing we need our app to do is accept some basic input from the terminal; down the line this will allow us to send commands to the drone when we detect certain strings as input. For example, if the user enters “takeoff” we can, in turn, send a take off command to the drone. To handle terminal input I’m going to use the ‘readline’ module that comes with Node. First we need to import the module (as I’m not using Babel or any other pre-processor, I’ll use the older require style syntax to do this). Once it is imported, we can use the createInterface function, which takes two streams as arguments: one stream to read from and one stream to write to. We pass in the process.stdin and process.stdout streams for read and write respectively.

const readline = require("readline");
const rl = readline.createInterface({ "input": process.stdin, "output": process.stdout});

Once that’s done we can add an event listener to rl for the “line” event. This event is fired whenever a line is detected, which in our case is whenever the user hits the enter/return key after typing a command. The event listener takes a function that is called whenever the line event fires, and it receives the entered line as its first argument. For now, we will just pass the line straight to console.log, which means that whenever the user enters a line, it will be logged back to the console.

console.log(`Let's get started!`);
console.log(`Please enter a command:`);
rl.on("line", line => console.log(line));

Once that’s done we can run our app for the first time. To do this head back to the terminal and then run:

> node src/app.js

You should notice that the lines “Let's get started!” and “Please enter a command:” are printed to the terminal. If you enter some text and then press enter/return, you should see whatever text you entered repeated back.
Now that we have some rudimentary IO set up, we can start to look out for when the user enters specific commands, such as “takeoff”, “land”, “forward”, “back”, “left” and “right”. To do this we will add a new function, “handleInput”, which takes the line passed to it by the event listener and performs a simple switch statement on its content. Depending on what the line received from the user contains, we can then execute specific functionality in our app.

function handleInput(line) {
    switch (line) {
        case "takeoff":
            console.log("Detected takeoff command.");
            break;
        case "land":
            console.log("Detected land command.");
            break;
    }
}

Once we’ve created our handleInput function, we can then pass the line we receive from our event listener to it like below:

rl.on("line", line => handleInput(line));
If we run the application this time then only when the line is equal to either “takeoff” or “land” should we see something printed to the terminal.

(screenshot: terminal output)

Take Off & Landing

The code for this section is available in the basic-movement branch.

It’s now time to finally connect to our drone and send two of the most important commands to it: take off and land.
As mentioned previously, we will use the UDP protocol to send commands to the drone. To create a UDP socket in Node.js we make use of the “dgram” module, similarly to how we used the “readline” module earlier for our basic IO. The createSocket function that’s available as part of the “dgram” module can be used to create a socket, which can be bound to a port of our choosing. Once we have a socket bound to a port, it can be configured to listen for incoming messages as well as to send outgoing messages for us.
When you connect to the Tello drone over Wi-Fi, it will be listening for command type messages on port 8889. It will assign itself an IP address on the network it hosts (by default, 192.168.10.1); we will need this information when creating our socket.
To keep things tidy we will create a function called “getSocket”. The function creates a socket using the “createSocket” function imported from the “dgram” module and binds it to the Tello command port, 8889. It then returns this socket, ready to be used for communication with the drone.

const TELLO_CMD_PORT = 8889;
const TELLO_HOST = "192.168.10.1"; // the drone's default address

function getSocket() {
    const socket = createSocket("udp4");
    socket.bind(TELLO_CMD_PORT);
    return socket;
}
Another thing we will do at this point is wrap some of our existing code in an Immediately Invoked Function Expression (IIFE). Declaring the function as async lets us make use of async/await, enhancing the readability and maintainability of the code later on. To do this we take everything other than the handleInput function and the require statements for “dgram” and “readline” and wrap it in an IIFE as shown below.

(async function(){
    console.log(`Let's get started!`);
    console.log(`Please enter a command:`);
    rl.on("line", line => handleInput(line));
})();

To learn more about what exactly an IIFE is and why they are useful in JavaScript, Kyle Simpson does a much better job explaining them than I could ever hope to in his book “You Don’t Know JS: Up & Going”, which is available for free. I’d highly recommend checking it out.
All we really need to know about IIFEs for the purposes of this app is that the function will be called as soon as it has been created. Our next step is to add the call to our new getSocket function within the IIFE that we just created.

const socket = getSocket();

The socket that we created emits some events that will be very useful for us when debugging and running our app, in order to make use of these events we need to register some event handlers.
The events that can be emitted from the socket that we care about are:
“message” – This event is fired when a message is received by the socket. An event handler can be provided that will be called with the received message as its first argument; the message can then be used as desired by the developer – in our case we will just log messages to the terminal. The event handler is also called with a second argument, rinfo, which contains information about where the message was received from.
“error” – This event is fired when an error occurs with the socket connection. An event handler can be provided that will be called with the error that occurred as its first argument; the error can then be inspected and used for logging and error handling purposes.
“listening” – This event is fired when the socket has been created and is listening for (ready to accept) incoming messages. An event handler can be provided; when called, it is not passed any arguments.
The handlers that we are going to add for the above events are:

socket.on("message", (msg) => {
    console.log(`Message from drone: ${msg.toString()}`);
});

socket.on("error", (err) => {
    console.log(`There was an error: ${err}`);
});

socket.on("listening", () => {
    console.log("Socket is listening");
});

In order to send a message, we will use the send() method that’s available on our socket. The send method accepts 6 arguments:
“msg” – The message to send.
“offset” – The offset in the buffer where the message starts.
“length” – The number of bytes in the message.
“port” – The destination port, this is the port the message will be sent to.
“address” – The destination host name or IP address; this is where the message will be sent on the network.
“callback” – A callback function executed on completion of sending the message. Its first argument can be an error; if the error is truthy, an error occurred and should be dealt with appropriately.
Some more info on this method is available in the Node.js dgram documentation.

Get the drone into SDK mode

Before we can start sending meaningful commands to our drone, such as “takeoff” and “land”, we need to get the drone into SDK mode. Once the drone is in SDK mode we can start to issue other commands, and it will (hopefully) respond to them. We can do this in the same way as we would send any other command to the drone: by using our socket’s send() method.
To do this, and to keep things nice and tidy, we can create a new function called “sendInitCommand”. The function takes the socket created earlier as an argument and uses the socket.send method to send the command over to the drone. The command that we will be sending is the string “command”.
As the socket.send method is asynchronous (it takes a callback function that is executed on completion of the send operation), we will make our sendInitCommand function return a new Promise. Using promises will allow us to use the async/await syntax mentioned earlier and will make our code easier to read and maintain.
The callback function that we provide to socket.send follows Node’s standard “error-first” convention: anything passed as the first argument will be an error object. We can perform a quick check on its value; if it is ‘truthy’, an error occurred while sending the message, and conversely, if it is ‘falsy’, there was no error and the command was sent to the drone successfully.
In the event that the error is ‘falsy’ (no error occurred), we can safely resolve our promise using the resolve function.
If the error is ‘truthy’ (something went wrong), we reject the promise with the error. Note that simply throwing inside the callback would not reject the promise; the throw would happen in the socket’s callback context, outside our async function, so rejecting is the way to surface the error to the code awaiting it.

function sendInitCommand(socket) {
    return new Promise((resolve, reject) => {
        socket.send("command", 0, "command".length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                return reject(err);
            }
            return resolve();
        });
    });
}
The next step is to actually call the function that we just created. A good place to add this call is under where we added the “listening” event handler.
After all that, our IIFE should now look something like this:

(async function(){
    console.log(`Let's get started!`);
    const socket = getSocket();
    socket.on("message", (msg) => {
        console.log(`Message from drone: ${msg.toString()}`);
    });
    socket.on("error", (err) => {
        console.log(`There was an error: ${err}`);
    });
    socket.on("listening", () => {
        console.log("Socket is listening");
    });
    await sendInitCommand(socket);
    console.log(`Please enter a command:`);
    rl.on("line", line => handleInput(line, socket));
})();

The next step is to get our drone to take off and land; we can finally add the code for sending the takeoff and land commands to the drone. To do this we create two new functions, sendTakeOff and sendLand. Their implementation is almost exactly the same as the sendInitCommand function we just created; the only differences are the function names and the strings they send in the socket.send call. The commands required for takeoff and landing are “takeoff” and “land”.

function sendTakeOff(socket) {
    return new Promise((resolve, reject) => {
        socket.send("takeoff", 0, "takeoff".length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                return reject(err);
            }
            return resolve();
        });
    });
}

function sendLand(socket) {
    return new Promise((resolve, reject) => {
        socket.send("land", 0, "land".length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                return reject(err);
            }
            return resolve();
        });
    });
}
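As an aside (a refactor of my own, not part of the tutorial code), the send functions differ only in the string they transmit, so they could be collapsed into one generic helper. The constants are the same ones used throughout the app:

```javascript
const TELLO_CMD_PORT = 8889;            // the Tello command port, as above
const TELLO_HOST = "192.168.10.1";      // the drone's default address

// One generic sender instead of several near-identical functions.
function sendCommand(socket, command) {
    return new Promise((resolve, reject) => {
        socket.send(command, 0, command.length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                return reject(err);
            }
            return resolve();
        });
    });
}

// The specific senders then become thin wrappers:
const sendTakeOff = socket => sendCommand(socket, "takeoff");
const sendLand = socket => sendCommand(socket, "land");
```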

Once we have these functions, things start to come together. It’s time to wire them up to the switch statement in the handleInput function we created earlier: we call either sendTakeOff or sendLand depending on the value the user has submitted.

async function handleInput(line, socket) {
    switch (line) {
        case "takeoff":
            console.log("Detected takeoff command.");
            try {
                await sendTakeOff(socket);
            } catch (err) {
                console.log(`Error sending takeoff: ${err}`);
            }
            break;
        case "land":
            console.log("Detected land command.");
            try {
                await sendLand(socket);
            } catch (err) {
                console.log(`Error sending land: ${err}`);
            }
            break;
    }
}

Now we are finally in a position where we can connect to our drone and get it to fly!
The first step is to turn on the drone and connect to it via Wi-Fi. The drone will normally use an SSID in the format TELLO-XXXXX, where XXXXX is a random set of numbers and characters. My drone uses the SSID “TELLO-D3F981”.
Once connected to the drone, we can start our app as we did previously by running:

> node src/app.js

If everything is working as expected we should see the following output in the terminal.
(screenshot: terminal output)
Notice that we are now seeing “Socket is listening” and “Message from drone: ok” in our output. These are messages that have come from the event handlers we added to the “listening” and “message” events earlier on.
When we send commands to our drone, it will sometimes acknowledge that a command has been executed successfully by responding with the string “ok”, or, if the command was not executed successfully, with a string that represents an error (the value of which will differ depending on the nature of the error).
For other commands the drone will respond with a value, for example, if we send the “battery?” command, the drone will respond with a number between 0 and 100 which is representative of the current battery percentage of the drone.
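Queries like “battery?” can be handled with a small helper that sends the command and resolves with the drone’s next reply. This is a sketch of my own: the function name is an assumption, and the reply matching is naive, since the drone doesn’t tag which reply belongs to which command.

```javascript
const TELLO_CMD_PORT = 8889;           // the Tello command port
const TELLO_HOST = "192.168.10.1";     // the drone's default address

// Send a query and resolve with the first message that comes back.
// Note: this naively assumes the next message is the reply to this query.
function queryDrone(socket, command) {
    return new Promise((resolve, reject) => {
        socket.once("message", msg => resolve(msg.toString()));
        socket.send(command, 0, command.length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                return reject(err);
            }
        });
    });
}

// Usage: const battery = await queryDrone(socket, "battery?");
```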
To get the drone to take off, type “takeoff” and then hit enter. If all has gone as expected the drone should take off and you should see “Message from drone: ok”.
(screenshot: terminal output)
Note: The drone will land automatically if it detects no commands within a 15 second time window.
Once the drone is in the air, let’s bring it back down to earth. Type “land” and then hit enter; this should make the drone auto-land.
Note: Sometimes after executing a command the drone needs a small amount of time to be ready for the next command. I’m not exactly sure what causes this but normally waiting for a second or so before sending the next command to the drone works.
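One pragmatic workaround (my own addition, not something from the SDK) is a tiny delay helper that can be awaited between commands:

```javascript
// Resolve after roughly the given number of milliseconds.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// For example, pause for a second between commands:
// await sendTakeOff(socket);
// await sleep(1000);
// await sendLand(socket);
```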

You’re flying!

So now we have mastered sending commands to our drone and taking off and landing, in the next part of this blog I will show you how to send directional commands. This will allow you to fly your drone forward, back, up, down and on trajectories. Happy hacking!
You can read Part 2 here:

Scott Carpenter

A Certified Scrum Master (Scrum Alliance), Scott is a Senior Software Developer based in the Gofore UK office in Swansea. Passionate about the JavaScript family of technologies (node/React/Angular) and very much enjoys creating awesome apps that run on the client or the server. Scott is also very interested in cloud computing, specifically Amazon Web Services and Google Cloud as well as microservices.

Do you know a perfect match? Sharing is caring

Sometimes there’s a need to fork a git repository and continue development with your own additions. It’s recommended to make a pull request to upstream so that everyone can benefit from your changes, but in some situations that’s not possible or feasible. When continuing development in a forked repo, some questions come to mind when starting. So here are some common questions and answers that I found useful when we forked a repository on GitHub and continued developing it with our specific changes.

Repository name: new or fork?

If you’re releasing your own package (to e.g. npm or mvn) from the forked repository with your additions then it’s logical to also rename the repository to that package name.
If it’s an npm package and you’re using scoped packages then you could also keep the original repository name.

Keeping master and continuing developing on a branch?

Using master is the sane thing to do. You can always sync your fork with an upstream repository. See: syncing a fork.
Generally, you want to keep your local master branch as a close mirror of the upstream master and execute any work in feature branches (that might become pull requests later).

How should you do versioning?

Suppose that the original repository (origin) is still in active development and does new releases. How should you do versioning in your forked repository if you want to bring in the changes made in the origin and still maintain semantic versioning?
In short, semver doesn’t support prepending or appending strings to a version. Adding your own tag to the version number you are following from the origin breaks the versioning, so you can’t use something like “1.0.0@your-org.0.1” or “1.0.0-your-org.1”. This has been discussed in, among other places, semver issue #287. The suggestion there was to use build metadata to encode the other version, as shown in semver spec item 10. The downside is that “Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence.”
If you want to keep a relation to the original package version and follow semver, your options are limited: the only option is to use build metadata, e.g. “1.0.0+your-org.1”. Alternatively, you can diverge from the origin’s version numbering altogether and version your fork as you go.
If you don’t need or want to follow semver, you can track the upstream version and mark your changes using markings similar to semver pre-releases: e.g. “1.0.0-your-org.1”.
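To illustrate the difference (in plain JavaScript, not using the npm semver package): everything after a “+” is build metadata and is stripped when determining precedence, while a “-” pre-release suffix does take part in precedence.

```javascript
// Per the semver spec, build metadata (a "+..." suffix) is ignored for
// precedence, so "1.0.0+your-org.1" ranks the same as plain "1.0.0".
const precedencePart = version => version.split("+")[0];

console.log(precedencePart("1.0.0+your-org.1")); // "1.0.0", same precedence as 1.0.0
console.log(precedencePart("1.0.0-your-org.1")); // "1.0.0-your-org.1", a distinct pre-release
```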

npm package: scoped or unscoped?

Using scoped packages is a good way to signal official packages for organizations. An example of using scoped packages can be seen in Storybook.
It’s more of a preference and naming convention of your packages. If you’re using something like your-org-awesome-times-ahead-package and your-org-patch-the-world-package then using scoped packages seems redundant.

Who should be the author?

At least add yourself to contributors in package.json.

Forking only for patching an npm library?

Don’t fork; use patch-package, which lets app authors instantly make and keep fixes to npm dependencies. Patches created by patch-package are automatically and gracefully applied when you use npm (>= 5) or yarn. You don’t need to wait around for pull requests to be merged and published, and there’s no more forking repos just to fix that one tiny thing preventing your app from working.

If you have any other questions, then post them in the comments below.

Marko Wallin

Marko works as a full stack software engineer and creates a better world through digitalization. He writes a blog about technology and software development and develops open source applications, e.g. for mobile phones. He also likes mountain biking.


Software development is one of the professions where you have to keep your knowledge up to date and follow what happens in the field. Staying current and expanding your horizons can be achieved in different ways, and one good approach I have used is to follow different news sources and newsletters, listen to podcasts and attend meetups. Here is my opinionated selection of resources for software developers: places to learn and share ideas, newsletters, meetups and more.


There are some good news sites to follow what happens in technology. They provide community-powered links and discussions.


Podcasts provide a nice resource for gathering experiences and new information about how things can be done and what’s happening and coming up in software development. I commute daily for about an hour, and time flies when you find good episodes to listen to. Here’s my selection of podcasts relating to software development.


Software Engineering Daily: “The world through the lens of software” (available i.a. on iTunes)
Software Engineering Radio: “Targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast”. (feed)
ShopTalk: “An internet radio show about the internet starring Dave Rupert and Chris Coyier.” (available i.a. on iTunes)
Full Stack Radio: “Every episode, Adam Wathan is joined by a guest to talk about everything from product design and user experience to unit testing and system administration.” (feed)


Syntax: “A Tasty Treats Podcast for Web Developers.” (available i.a. on iTunes)
The Changelog: “Conversations with hackers, leaders, and innovators of software development.” (available i.a. on iTunes)
React Podcast: “Conversations about React with your favourite developers.” (available i.a. on iTunes)
Brainfork: “A podcast about mental health & tech”


React Native Radio as a podcast (available i.a. on iTunes)

In Finnish

ATK-hetki: “Vesa Vänskä and Antti Akonniemi discuss technology, business and self-improvement.” (available i.a. on iTunes)
Webbidevaus: “Talk radio about web development, in Finnish! Hosted by Antti Mattila and Riku Rouvila.” (available i.a. on iTunes)


Normal information overload is easily achieved, so it’s beneficial to use, for example, curated newsletters for the subjects which intersect the stack you’re using and the topics you’re interested in.
The power of a newsletter lies in the fact that it can deliver condensed and digestible content which is harder to achieve with other good news sources like feed subscriptions and Twitter. A well-curated newsletter to a targeted audience is a pleasure to read and even if you forgot to check your newsletter folder, you can always get back to them later.


Hacker Newsletter: Weekly newsletter of the best articles in Hacker News.

Mobile development

iOS Dev Weekly: Hand-picked round up of the best iOS development links published every Friday.
This Week In Swift: List of the best Swift resources of the week.
iOS Dev nuggets: Short iOS app development nugget every Friday/Saturday. Short and usually something you can read in a few minutes and improve your skills at iOS app development.
React Native: Bi-monthly summary of React Native news, articles, issues & pull requests, libraries and apps.


Java Web Weekly by Baeldung: Once-weekly email roundup of Java Web curated news by Eugen Baeldung.
The Java Specialists’ Newsletter: A monthly newsletter exploring the intricacies and depths of Java, curated by Dr. Heinz Kabutz.
Java Performance Tuning News: A monthly newsletter focusing on Java performance issues, including the latest tips, articles, and news about Java Performance. Curated by Jack Shirazi and Kirk Pepperdine.


DB Weekly: A weekly round-up of database technology news and articles covering new developments, SQL, NoSQL, document databases, graph databases, and more.


HTML5Weekly: Weekly HTML5 and Web Platform technology roundup. Curated by Peter Cooper.
CSS Weekly: A roundup of CSS articles, tutorials, experiments and tools. Curated by Zoran Jambor.

Web development

Status code: “Keeping developers informed.” weekly email newsletters on a range of programming niches (links to JavaScript weekly, DevOps weekly etc.)
Web Development Reading List: Weekly roundup of web development–related sources, selected by Anselm Hannemann.
Hacking UI: A weekly email with our favourite articles about design, front-end development, technology, startups, productivity and the occasional inspirational life lesson.
Scott Hanselman: Newsletter of Wonderful Things. Includes interesting and useful stuff Scott has found over the last few weeks and other wonderful things.
MergeLinks: Weekly email of curated links to articles, resources, freebies and inspiration for web designers and developers.
“How to keep up to date on Front-End Technologies” page lists newsletters, blogs and people to follow.


JavaScript Weekly: Weekly e-mail round-up of JavaScript news and articles. Curated by Peter Cooper.
Node Weekly: Once–weekly e-mail round-up of Node.js news and articles.
A Drip of JavaScript: “One quick JavaScript tip”, delivered every other Tuesday and written by Joshua Clanton.
SuperHero.js: Collection of the best articles, videos, and presentations on creating, testing, and maintaining a JavaScript code base.
State of JS: Results of yearly JavaScript surveys

User experience and design

UX Design Weekly: Hand-picked list of the best user experience design links every week. Curated by Kenny Chen and published every Monday.
Userfocus: Monthly newsletter which shares an in-depth article on user experience.


DevOps Weekly: Weekly slice of devops news.
Web Operations Weekly: Weekly newsletter on Web operations, infrastructure, performance, and tooling, from the browser down to the metal.
Microservice Weekly: A hand-curated weekly newsletter with the best articles on microservices.


You can learn much from others, and to broaden your horizons it’s beneficial to attend different meetups, listen to how others have done things and hear their war stories. There’s also free food and drink.

Mostly Helsinki based

Tampere (Finland) based

Community chats

Feel free to add your favourite resources in the comments below.

Marko Wallin



Choosing React Native in 2019

As head of mobile development, part of my role is to evaluate technologies so that we know the strengths and weaknesses of each one. This informs what we should use when a new project is starting up.
Gofore has used React Native extensively for the past two years. We have had success with it and enjoy using it. It is definitely possible to make real production grade apps with it; as an example, one of the Google Play Store’s Best of 2018 apps was made using React Native. Still, there is room for improvement before React Native can be declared 1.0 stable and ready. During 2018 there was some turbulence and uncertainty regarding the future of React Native, which made me begin closely following the future of the library. Here is a report of my findings.
TL;DR: It is looking like 2019 will be the year of React Native coming of age. There are many ongoing undertakings to make React Native even better than it is today. I am more excited for the future of React Native than I have ever been. Here are some of the (technical) reasons why.

Facebook contributions

React Native was created by Facebook, and for a while in 2018 it was a bit unclear how much Facebook was investing in the technology. After some rumours, Facebook made it clear, through words and actions, that they are in it for the long haul.
One good sign that Facebook is ramping up work on React Native is the fact that they have been hiring more developers to their React Native team.
Firstly, React Native will enjoy improvements in React itself, which will have two big new features added in 2019: Hooks and Suspense. Hooks let developers use state and other React features in functional components. After trying out hooks, I have been eagerly waiting to be able to use them everywhere! Suspense refers to React’s new ability to “suspend” rendering while components are waiting for something, displaying a loading indicator in the meantime. This will ease the typical pain point of having to figure out the different states (init, loading, error, ready) for your component; Suspense will manage the complexity for you.
In June, Facebook published a blog post explaining that they are “working on a large-scale re-architecture of React Native to make the framework more flexible and integrate better with native infrastructure in hybrid JavaScript/native apps.”
This rework includes a JavaScript interface (JSI), a UI re-architecture (called Fabric) and a new native module system (called TurboModules), but is generally referred to altogether as Fabric. It will offer significant improvements under the hood, improve performance, simplify interoperability with other libraries, and set a strong foundation for the future of React Native.
In November Facebook published a roadmap for React Native. It gives an outline of their vision, including a healthy GitHub repository, stable APIs, a vibrant ecosystem and excellent documentation.
These are all areas where React Native has been criticized and I am really excited to see that Facebook has identified them and is actively working on improvements. It will lay a good foundation for the open source community to participate and contribute.

Open source community

The React Native open source community got much more organized in 2018, and it seems we will reap the rewards of that in 2019. There is a new repository for transparent discussions and proposals, which facilitates changes proposed and made by the open source community.
There is an ongoing project called The Slimmening, which aims to make the React Native core smaller by extracting parts of it that can be more easily maintained and developed separately. There are already two good examples of this: Jamon Holmgren (@jamonholmgren) championed extracting WebView, and Mike Grabowski (@grabbou) spearheaded efforts to extract the React Native CLI. WebView has already received much more love as a standalone library, and it shows what The Slimmening, once done, can mean for the future of React Native.
Another ongoing project, which should be close to finished, is updating the Android JSC (the JavaScriptCore engine used to run JavaScript on Android). The current version is archaic, resulting in behavioural differences between iOS and Android as well as performance issues. Having a modern runtime is crucial for the promise of a truly cross-platform development environment. Upgrading the JSC would greatly improve the performance of React Native apps running on Android and allow support for x64 builds of Android apps.
Currently, there are a lot of 3rd party community libraries. The typical challenge with them is that they might not be well maintained. Expo is a company building on top of React Native, and they have been pushing to make React Native development possible without knowledge of the native parts. Expo’s APIs look well thought out and well maintained, but they have not been available outside of Expo-based React Native projects. That is, until now. Having well-maintained community APIs available will make a significant difference for developers.


Hopefully, this list of ongoing technical projects and improvements in and around React Native has given you some insight into the potential that React Native has. Time will tell how well it delivers, but as of now, I am very positive and excited to be a mobile developer using React Native. I will be recommending React Native for many mobile projects at Gofore in the future as well. Let me know your thoughts in the comments below.

Juha Linnanen

Head of Mobile Development at Gofore Helsinki. Passionate about creating mobile applications that help achieve results.



Vue tips

What Is Vue CLI?

Vue CLI (version 3) is a system for rapid Vue.js development. It’s a smooth way to scaffold a Vue project structure and allows a zero-config quick start to coding and building. Vue CLI Service, the heart of every Vue CLI app, neatly abstracts away common front-end development tools such as Babel, webpack, Jest and ESLint, while still offering flexible ways of configuration and extension as your project grows.
Let’s go through a few tips that’ll help you get even more out of your Vue CLI App.

1. Code Splitting And Keeping Bundles Light

Large Vue apps usually use Vue Router with multiple routes. Individual routes might also use various node modules. With Vue Router and webpack’s support for dynamic imports, routes can be automatically split into separate JavaScript and CSS bundles. It’s easy to do, for example, in your router.js:

{
  name: 'profile',
  path: '/profile/:user',
  component: () => import('./views/Profile.vue')
}

Code-split routes are loaded only on demand, which can have a major benefit on the initial loading time of your app.
Vue CLI also comes with Webpack Bundle Analyzer. It offers a nice birds-eye view of the built app. You can visualize bundles, see their sizes and also the size of modules, components or libraries they consist of. This will come in handy when Vue CLI warns you about bundle sizes getting out of hand, giving you some hints where to trim down the fat.
Vue CLI Service provides an extra --report argument for the build command to generate a build report. Add this handy little snippet to the scripts section of your package.json:

"build:report": "vue-cli-service build --report"

Running npm run build:report, you’ll get a report.html generated in your dist folder, which you can then open in your browser.

2. Fine-Tuning the Prefetching

Not only does Vue CLI handle code splitting, it also automatically injects these bundles as resource hints into your HTML’s <head> with <link rel="prefetch" href="bundle.js">. This enables browsers to download the files while the browser is idling, making navigation to different routes snappier.
While this may be a good thing, in larger apps there might be many routes that aren’t meant for the average user. Prefetching these routes will consume unnecessary bandwidth. You can disable the prefetch plugin in vue.config.js:

module.exports = {
  chainWebpack: config => {
    config.plugins.delete('prefetch')
  }
}

And manually choose the prefetchable bundles with webpack’s inline comments:

import(/* webpackPrefetch: true */ './views/Profile.vue')

3. Use Sass Variables Everywhere

Vue’s scoped styles, Sass and BEM are helpful tools for keeping your CSS nice and tidy. You probably would still like to use some global Sass variables and mixins inside your components, preferably without importing them separately every time.
Instead of writing something like this in every component:

<style lang="scss" scoped>
@import '@/styles/variables.scss';

You can add this in vue.config.js:

module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import 'src/styles/variables.scss';`
      }
    }
  }
}

4. Test Coverage with Jest

Vue CLI comes (optionally) with Jest all configured, and with Vue Test Utils, writing unit tests for your components is a breeze. The CLI Service supports all of Jest’s command-line options as well. A nice thing about Jest is its built-in test coverage report generator.
To generate a report, again for convenience, you can add another script to your package.json:

"test:coverage": "vue-cli-service test:unit --coverage"

Now run it with npm run test:coverage. Not only does it show a report in your terminal, an HTML report will also be created in the coverage folder of your project. You might want to add this folder to your .gitignore.
Using collectCoverageFrom in your Jest’s config, you can make the coverage also include files that don’t have tests yet, helping you identify and increase the coverage where it’s needed:

collectCoverageFrom: ['src/**/*.{js,vue}']
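For context, in a Vue CLI project this setting typically lives in a jest.config.js file (or under the jest key of package.json). A minimal sketch, assuming the preset name that Vue CLI’s unit-test plugin generates; check your own project for the exact value:

```javascript
// jest.config.js – a sketch; adjust the preset to match your generated project
module.exports = {
  preset: '@vue/cli-plugin-unit-jest',
  // Count coverage for all source files, even ones without any tests yet
  collectCoverageFrom: ['src/**/*.{js,vue}']
}
```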

5. Modern Build for Modern Browsers

Most of us probably still need to take care of users with older browsers. Luckily Vue CLI supports a browserslist config to specify the browsers you are targeting. That configuration is used together with Babel and Autoprefixer to automatically provide the needed JavaScript features and CSS vendor prefixes.
With a single extra --modern argument, you can build two versions of your app; one for modern browsers with modern JavaScript and unprefixed code, and one for older browsers. The best part is, no extra deployment is needed. Behind the scenes, Vue CLI builds your app utilizing new attributes of the <script> tag. Modern browsers will download files defined with <script type="module"> and older browsers will fallback to JavaScript defined with <script nomodule>.
The final addition to your package.json:

"build:modern": "vue-cli-service build --modern"

Providing modern browsers with modern code will most likely improve your app’s performance and size.

Tuomo Raitila

Tuomo is a software designer primarily specializing in the user-facing side of web development and design, crafting visually appealing, modern, simple and clean UI’s and websites, whilst always keeping the focus on user experience.


Gofore Project Radar 2018 Summary

By now, it has become an annual tradition at Gofore to conduct a Project Radar survey at some point in the year to gain better insight into our presently running software projects. The 2018 Gofore Project Radar builds on two previous Project Radar iterations, conducted in fall 2016 (in Finnish only) and spring 2017, containing a set of questions relating to currently employed tech stacks, development practices and projected (or hoped-for) technological changes. Most of the questions from last year’s Project Radar made their way into this year’s survey to allow for year-on-year variation detection. We also added some new questions that were considered important enough to warrant inclusion.
So with the 2018 Project Radar results in, what can we tell about our projects’ technological landscape? What can we say has changed in our technological preferences and project realities over the past year?

The Gofore Project Radar survey results for 2018 are in!

End of JavaScript framework fatigue?

Over the past few years, the frontend development scene has shown intermittent signs of “framework fatigue” as a steady stream of new frameworks, libraries and tools has flooded the scene, challenging developers to work hard to keep pace with the latest developments, current community preferences and best practices. A look at our Project Radar data tells us that at Gofore there has been no significant churn when it comes to primary frontend technologies employed by individual projects. Instead, the results indicate a distinct consolidation around React, Angular and Vue.js, the three major contenders in the JS framework race. All these three have gained ground on older frontend techs (AngularJS, jQuery etc.) and ratcheted up their project adoption percentage, React being the top dog at a near-50% adoption rate among projects represented in the survey. If given a chance to completely rewrite their project’s frontend, most respondents would, however, pick Vue.js for the job.
The fact that there was no major change from last year in preferred frontend frameworks is perfectly in line with developments (or lack thereof) on the frontend scene over the past year. While last year saw major releases of both React and Angular roll out (with Vue.js 3.0 still somewhere on the horizon), there were no new frameworks to come along that would have been able to upset the status quo and catch on big time in the community (regardless of distinct upticks of interest in at least Svelte.js and Preact). This stability comes in stark contrast to the unsettled years in the not-too-distant past when the balance of power between different JS frameworks was constantly shifting as new frameworks and libraries appeared on the scene.
Looking beyond the battle of JS frameworks, a significant trend with regard to frontend development is the ever-increasing share of single-page applications among our projects’ frontends. Around 64% of this year’s Project Radar respondents reported to be working with single-page applications, up from 57% in last year’s Project Radar results.

Node.js on the rise

Moving our focus to the backend, where Java has traditionally held a predominant position among our projects, a somewhat different trend emerges. While the Project Radar data clearly brought out a tendency toward consolidation around the three major frontend frameworks, the picture on the backend side, on the other hand, looks a little more fragmented. Last year’s Gofore Project Radar pegged Java usage at nearly 50% among all projects represented in the survey, trailed by Node.js and C# each with a 15% share of the cake. While Java still came out on top this year, it was reported as the primary backend language in only 32% of the projects, down a whopping 15 points from last year’s results.
This drop was fully matched by an upward surge by Node.js, which more than doubled its share of the overall pie, up 17 points from last year. While C# stood its ground at close to 15%, a crop of new languages, missing from previous years’ results, entered the fray in the form of Kotlin, Clojure and TypeScript. Regardless of there being only a handful of projects where they were reported as primary backend languages, they contributed to the growing share of minority languages in our backend landscape, a group previously comprised of Scala, Python, Ruby and PHP.
Similarly to how respondents were asked to choose their hoped-for replacement tech for their frontends, we also asked our developers what was their preferred language for rewriting their backends if given the chance to do so. Last year most respondents would take the cautious approach and stick with their previously established backend languages. This year, however, there was considerable interest in rewriting backends in Kotlin, particularly among respondents who reported Java as their primary backend language (55% of all respondents were eager to switch to Kotlin from some other language).
Before drawing any conclusions from these statistics, it should be noted that upwards of 55% of respondents reported to be working with a microservices-type backend stack, suggesting that potentially multiple languages and server-side frameworks might be used within a single project. Still, the appeal of Kotlin, particularly among Java developers, is clearly apparent, as is the shift toward Node.js being the centerpiece of most of our backend stacks.
While the Project Radar does not shed any light on the reasons behind any technological decisions, the increasing popularity of Node.js can probably be put down to the above-mentioned prevalence of microservices-esque backend setups, where Node.js often slots in to serve as an API gateway fronting other services, which, in turn, might be written in other languages. Another contributing factor might be the emergence of universal JavaScript applications, where the initial render is handled by running JavaScript on the backend.
The popularity of Kotlin, on the other hand, has been picking up ever since Google enshrined it as a fully supported language for Android development. Considering its status as one of the fastest-growing programming languages in the world, its increasing presence in server environments is hardly surprising.

Going serverless

Now where do we run our project infrastructure in the year 2018? According to last year’s Project Radar results, more than two thirds (68%) of all respondents were still running their production code in a data center that was managed either by the client or a third party. This year, that number had come down to 59%. While this isn’t particularly surprising, what is mildly surprising, though, is the fact that IaaS-type infrastructure saw an even greater decline in utilization. Only 47% of all respondents reported to be running their production code in an IaaS (Infrastructure as a Service) environment, as opposed to 60% last year.
As the utilization of both traditional data center environments and IaaS services fell off, PaaS (Platform as a Service) and, especially, serverless (or FaaS, Function as a Service) platforms were reported to take up a fair portion of the overall share of production environments. While still in the minority, PaaS services were reported to be used by 12% of all respondents, tripling their share of 4% from last year, and serverless platforms by 16.5% of all respondents (no reported usage last year as there was no dedicated response option for it).
As our projects’ production code is more and more removed from the actual hardware running it, containerization has also become more commonplace, as evidenced by the fact that Docker is now being used by 76% of all respondents (up from 43% last year). Despite Docker’s increasing adoption rate, there wasn’t much reported use for the most popular heavy-duty container orchestration platforms: Kubernetes, Docker Swarm, Amazon Elastic Container Service and OpenShift Container Platform were only reported to be used by 14% of all respondents.
Since running our code in cloud environments enables shorter deployment intervals, one could think we’d be spending more time flipping that CI switch that kicks off production deployment. And to some extent, we do: we have fewer projects where production deployments occur only once a month or less often (10% as opposed to 20% last year), but, somewhat surprisingly, fewer projects where production deployments are done on a daily basis (10.5% vs 12% last year).

Miscellaneous findings

  • Key-value databases doubled their reported project adoption (32% vs 16.5% last year)
  • Jenkins was the most prevalent CI platform among represented projects, with a 57% adoption rate (its closest competitor, Visual Studio Team Services/Azure DevOps well behind at 17%)
  • Close to nine percent of all respondents reported to be using a headless CMS (Content Management System)
  • Ansible was being used by 71% of respondents who reported using some configuration management (CM) tool, clearly ahead of any other CM tool (Chef was being used by a little shy of eight percent of CM tool users, while Puppet had no reported users)
  • Development team sizes were smaller than last year (57% of dev teams had five or more team members last year, whereas this year such team sizes were reported by 52% of respondents)
  • The reported number of multi-vendor teams was smaller than last year (41% vs 47% last year)
  • Most respondents reported to be working on a project that had been running 1-3 years at the time of responding
  • Most project codebases clock in at 10k – 100k in terms of LOC (lines of code)
  • Scrum was the most favored project methodology, being employed by nearly 51% of all represented projects. Kanban, on the other hand, saw the most growth of any methodology (22% vs 12% last year)

Some closing thoughts

Once again, the annual Project Radar has told us a great deal about our preferred programming languages, frameworks, tooling and various other aspects of software development at Gofore. While the survey is by no means perfect – and I can easily come up with heaps of improvement ideas for the next iteration – the breakdown of its results enables us to more easily pick up technological trends in our ever-increasing multitude of projects. This year’s key takeaways are mostly reflections of industry trends at large, but there are some curiosities that would be hard, if not impossible, to spot if not for the Project Radar. The usefulness of these findings is debatable, as some of them fall under trivia, but still they come as close to a “look in the mirror”, technology-wise, as one can get at a rapidly growing company of this size.

Henri Heiskanen

Henri is a software architect specializing primarily in modern web technologies, JavaScript/Node.js & JVM ecosystems and automated infrastructure management. A stickler for clean code and enforcement of best practices in project settings, Henri is uncompromising in delivering well-tested, high-quality code across the stack.


Gofore Bots
I wrote a blog post last year about how bots are used to automate routine work in our company (Gofore). The same topic is even more relevant today when we are stepping into an era of AI. Let’s see what has happened to our bots since my last blog.

30 little bots

Today we have around 30 active bots that integrate with Slack. Almost half of these slackbots are focused on utilisation and billing functions. Reliable utilisation and billing are a consulting company’s engine oil that enables all other activities. These bots monitor people’s hour markings, calibrate utilisation capacity, send reminders to bill customers and catch human errors. Utilisation and billing were also the first functions we automated.
The other significant group is reporting slackbots. All companies have a lot of business-critical information that needs to be made visible to employees. Slackbots list, for example, customer statistics, site-based information and the highest-impact blog and social media posts. These slackbots can also be used on demand.
The third group of slackbots is everything else. We have an overtime bot, an SLA-observer bot and bots for the sales team. One slackbot updates users’ vacation statuses and another connects people for a beer.

In God we trust, others bring bots 

Basically, a bot is a piece of software that performs automated tasks. Despite this, bots have advantages that many other applications lack. I have listed the three most important ones.
A slackbot’s best asset is simplicity, because a bot’s user interface is mostly text and icons. In the same way, interaction with a bot is based on text rather than graphical forms or other UI elements. Some bots are totally invisible to users and just run in the background.
A slackbot’s simple user interface helps to focus on the essentials. There is no need to spend time on responsive design challenges or on debugging the newest JavaScript framework’s defects. Product planning can be targeted at feature impact and validating user needs.
The second advantage is bots’ overall popularity. Many users have used bots before, so a bot’s behaviour is well known. For this reason, intensive training and user guides can be avoided. Bots’ messages are displayed in different Slack channels continuously, so promotion also happens naturally.
The third advantage is the Slack platform. Slack provides a smooth user experience, out-of-the-box services (security, authentication, performance, data storage etc.), wide device support and excellent integration options. Although all our bots are handmade, Slack has sped up our development enormously.

Value for life

The value proposition is the reason that the product exists; in our case, it can be summarised in three points. Better job satisfaction means bots take care of boring and repetitive tasks and let people work on meaningful and interesting duties. The cost-saving aspect focuses on time-consuming and error-prone functions. Practically, our bots have already replaced a big part of middle management tasks. Improved decision-making means that business-critical data is visible to everybody 24/7. Every new bot idea is validated and prioritised against these three factors.
Some months ago, our bot team created an internal survey regarding how people feel about our slackbots. The results were very promising – 95% of people think that the bots are useful and 30% of people think that the bots are vital to the company. This feedback gave an extra boost and motivation to the whole team to continue development work.

Work in progress

My estimate is that our company still has around 20-30 manual processes that can easily be automated by bots: parts of the recruiting process, subcontractor management, credit card administration and device handling, just to name a few. After this low-hanging fruit has been picked, it’s time to add more AI to the bots.
The outcome of many internal projects is mediocre. In contrast, bots bring value to our company every single day. When it has been said more than once that these bots are actually part of our company’s competitive advantage, you know that product development has reached its goal.
Juhana Huotarinen – the proud Product Owner of the Gofore Bot Team
Graphic design
Ville Takala

Juhana Huotarinen

Juhana Huotarinen is a lead consultant of software development at Gofore. Juhana’s background is in software engineering and lately, he has taken part in some of the biggest digitalisation endeavours in Finland. His blogs focus on current topics, involving agile transformation, software megatrends, and work culture. Juhana follows the ‘every business is a software business’ motto.


In my last blog post I shared my ideas about some nice features our meeting room system should have – one was measuring air quality in meeting rooms. Soon after publishing the blog post, I got a call from Mika Flinck from Digita who offered a helping hand to develop this feature. After the call, Digita sent two Elsys ERS-CO2-sensors, which work on Digita’s Long Range Wide Area Network (LoRaWAN), for us to use for developing and testing purposes. The sensors can measure a room’s temperature, moisture, level of lightness and carbon dioxide (CO2).
Elsys ERS-CO2-sensor

One of the Elsys ERS-CO2-sensors in Tampere.

LoRaWAN is a wireless Low Power, Wide Area Network (LPWAN) networking protocol administrated by the LoRa Alliance association. IoT devices on a LoRaWAN network can have batteries that last up to 10 years thanks to the low-power technology, and devices typically send messages to the network infrequently, for example every 15 minutes.
Architecture of the current solution
Architecture of the current solution.
In Digita’s LoRaWAN, all messages and commands are handled via Actility ThingPark, which works as a gateway between the LoRaWAN network and the Internet. In our case, Actility ThingPark resends all messages in JSON format from the LoRaWAN network to Amazon Web Services’ (AWS) API Gateway. The API Gateway then sends the messages to a Lambda function, which decodes the Elsys payload, and the decoded information is finally sent to our meeting room system in EC2. All client systems can get updated information from the server.
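To give an idea of the decoding step, here is a rough Node.js sketch of what such a Lambda decoder might do. This is not our actual Lambda code: the function name is made up, and the type bytes are taken from Elsys’ public payload documentation, so verify them against the sensor manual before relying on this.

```javascript
// Minimal sketch of an Elsys uplink payload decoder (not the production code).
// Each reading starts with a type byte, followed by a fixed-size value:
//   0x01 temperature (2 bytes, value / 10 °C), 0x02 humidity (1 byte, %),
//   0x04 light (2 bytes, lux), 0x06 CO2 (2 bytes, ppm)
function decodeElsysPayload(hexPayload) {
  const bytes = Buffer.from(hexPayload, 'hex');
  const data = {};
  let i = 0;
  while (i < bytes.length) {
    switch (bytes[i]) {
      case 0x01: // signed 16-bit value, tenths of a degree
        data.temperature = bytes.readInt16BE(i + 1) / 10;
        i += 3;
        break;
      case 0x02: // single byte, relative humidity in percent
        data.humidity = bytes[i + 1];
        i += 2;
        break;
      case 0x04: // unsigned 16-bit value, lux
        data.light = bytes.readUInt16BE(i + 1);
        i += 3;
        break;
      case 0x06: // unsigned 16-bit value, ppm
        data.co2 = bytes.readUInt16BE(i + 1);
        i += 3;
        break;
      default: // unknown type: stop rather than misread the rest
        return data;
    }
  }
  return data;
}

// Example payload: 22.6 °C, 41 % humidity, 39 lux, 810 ppm CO2
console.log(decodeElsysPayload('0100e2022904002706032a'));
```

In the real setup, the Lambda handler would run this kind of decoding on the payload field of the JSON message coming from ThingPark and forward the result to the meeting room server.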

What is good room air quality?

For the meeting room system, I used several sources to gather ideal values for good air quality. I preferred information from the Finnish Institute of Occupational Health (FIOH) and The Organisation for Respiratory Health, which contained recommendations for air temperature and moisture according to seasonal and weather conditions. Regulations on working conditions also set some boundaries for good room air quality. I used the following values for our meeting room system.

           Moisture (%)        Carbon dioxide (PPM)   Temperature (°C)
Good       25 – 45             < 800                  20 – 23
Bad        0 – 25 or 45 – 70   800 – 1150             19 – 20 or 23 – 25
Very bad   > 70                > 1150                 < 19 or > 25

The limits are averaged from several sources and assume daily work in an office environment. Now the meeting room tablets can visualize the level of each metric using different colours. In the future, we will develop a feature in which all the limits are drawn on the timeline graphs, visualizing any points that exceed them.
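The limits above are straightforward to encode. A minimal sketch of how the tablets’ colour-coding could be derived from a reading; the function names are made up, but the thresholds come directly from the table:

```javascript
// Sketch only – not the actual meeting room system code.
// Classify a CO2 reading (ppm): < 800 good, 800–1150 bad, > 1150 very bad.
function classifyCo2(ppm) {
  if (ppm < 800) return 'good';
  if (ppm <= 1150) return 'bad';
  return 'very bad';
}

// Temperature (°C): 20–23 good, 19–20 or 23–25 bad, otherwise very bad.
function classifyTemperature(celsius) {
  if (celsius >= 20 && celsius <= 23) return 'good';
  if ((celsius >= 19 && celsius < 20) || (celsius > 23 && celsius <= 25)) return 'bad';
  return 'very bad';
}

// Moisture (%): 25–45 good, 0–25 or 45–70 bad, above 70 very bad.
function classifyMoisture(percent) {
  if (percent >= 25 && percent <= 45) return 'good';
  if (percent <= 70) return 'bad';
  return 'very bad';
}

console.log(classifyCo2(650));        // 'good'
console.log(classifyTemperature(24)); // 'bad'
console.log(classifyMoisture(80));    // 'very bad'
```

Each classification can then be mapped to a colour on the tablet view, and the same thresholds can later be drawn as limit lines on the timeline graphs.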

A timeline graph

A timeline graph from the meeting room in Jyväskylä. The graph can be zoomed and panned. Users can hover on the graph to get detailed values.

A graph from the current solution
The upper right corner shows the latest information of air quality on the tablet view.

Last thoughts

LoRaWAN sensors are very easy to handle: just configure and forget. In the ideal situation, you only need to change the sensor’s battery after a few years, and nothing else needs to be done. The meeting room system now has configuration support where we can determine in which room each sensor is located. When a sensor is moved to a new location, we just link the sensor to the new room.
For measuring air quality, I have a vision of combining people’s subjective opinions with the sensor data. This will make us smarter about what good air quality is, especially when taking into account how many people were in the room. Maybe someday our Seppo-bot can ask a few simple questions after the meeting.
Big thanks to Mika Flinck from Digita for lending us the LoRaWAN sensors for development and testing purposes! This was a great opportunity to learn about LoRaWAN and develop our meeting room system further.

Jarkko Koistinaho

Jarkko works as a technical project manager at Gofore and he is a quality and testing oriented professional in the software industry. Depending on the situation, he could be a software engineer or a Scrum Master in addition to being a software tester. Jarkko can also do some DevOps-tasks. Model-based testing and performance testing are his special skills.


Dave Snowden says that human intelligence is especially based on utilizing patterns. “Our ability to link and blend patterns … gives us the ability to adapt to a rapidly changing context and, critically, to innovate”. Metaphors, patterns and stories are the most powerful tools of explanation, knowledge transfer and teaching. We have a limited capacity for information processing, but it is not the basis of our intelligence; Snowden argues that the only people who make decisions purely through information processing are on the autism spectrum. Whether you are communicating a feature in a software project or the marketing vision of a large enterprise, the best way forward is to communicate it as a story or as a metaphor.
A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means. You may find best practices, for example, in agriculture, medicine and manufacturing. Software development has its own best practices: agility, continuous improvement, prioritization, active communication, design patterns, QA and CI/CD. The Agile manifesto itself is a list of best practices. For a seasoned expert, these sound like no-brainers. Well, how on earth do projects still fail?
While working as a Scrum Master, one realizes that while pursuing well-known best practices, the same bad practices keep popping up again and again. These bad practices form a list of common pitfalls, which are called antipatterns or, in an Agile context, ScrumBut. The antipatterns can be used as-is, but they are most effective when used side by side with the best practices. Work towards the best practices; work away from the antipatterns.


Sprint is too short or too long

  • Too short a Sprint (less than a week) increases the ratio of bureaucracy beyond an acceptable level. Too long a Sprint (a month or more) makes it hard to react to customer requests and the feedback loop becomes too long.
  • The best practice is to keep the Sprint a bit on the short side, so the feedback and correction frequency stays fast.
  • Too long a sprint also allows one to create mini waterfalls inside a sprint
  • Often is better

Sprint produces nothing working 

  • Whatever was developed during the Sprint must be running in at least the QA-environment at the end of the Sprint.
  • When software is not integrated, tested or deployed during the Sprint, work will overflow to the next Sprint. This means that it is impossible to change direction between the Sprints: one must first pay off the technical debt that has been created in the form of non-functional code. If nothing new works after the Sprint, the speed is 0.
  • Often the “almost ready” features require days of work to finish. You can speed up the completion of difficult tasks by having daily status checks. In the most critical situations, the best way is to set up a “war room”, where everybody concentrates only on finishing or fixing a single feature.
  • The best practice is to finish the earlier task before starting a new one.
  • Stop starting and start finishing

No (real) Product Owner

  • The Product Owner prioritizes and clarifies.
  • The Product Owner provides a list of the most important Use Cases and a detailed description of each. The task of a Scrum Master is to tell if those features fit into the next Sprint. The less you start, the faster you finish.
  • A common malfunction is a customer who wants software to be developed but doesn’t commit to the development process. Either there is no Product Owner, or the person is too busy/doesn’t know the domain/doesn’t have the authority to make decisions. In this situation the Scrum Master should coach the customer; book a weekly meeting with the customer representative where the roadmap is being clarified and broken down into detailed tasks.
  • The best practice is for the Product Owner to have a detailed plan for the next few Sprints.
  • The task of a Product Owner is to maximize the value of the Agile team

Requirements are not communicated

  • A big up-front requirement specification helps nobody.
  • Use Cases must be communicated between the Product Owner and the development team before the development work starts.
  • Development should never be based only on a document. That always causes the first version of the software to come out unsatisfactory. It is much faster to clarify the details over a call than to code, test and integrate for weeks only to find out that the requirement spec wasn’t top notch.
  • Coding a non-communicated requirement creates useless technical debt.
  • The best practice is to have a clear and shared plan for the next few Sprints

No estimates

  • Prioritization forces the team to have important discussions about the content and its importance. Which user group to serve first? Non-functional or functional first? New features or regulations? The common symptom of “no estimates” is that the Sprints produce nothing functional.
  • It’s the team that estimates the stories being planned for the next sprint, and it’s the team that commits to do their best meeting the sprint goals.
  • When the Use Cases have estimates, it is easier to prioritize them and plan their development. It is also easier to identify the cases that need to be divided into more fine-grained Use Cases. Estimates are created semi-automatically when the requirements are communicated to the team: when the Product Owner clarifies the Use Case and the team divides it into technical tasks, estimates will emerge. Estimates also inform the team velocity. If you divide all stories into roughly similar sizes, you can skip estimation and just count the number of stories.
  • The best practice is to have a clear and shared plan for the next few Sprints
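The velocity arithmetic behind the bullets above can be sketched in a few lines of Python. The sprint numbers and the `velocity`/`sprints_remaining` helpers are made up for illustration; real forecasts should also account for how noisy the historical data is:

```python
import math

def velocity(points_per_sprint):
    """Average story points completed per sprint."""
    return sum(points_per_sprint) / len(points_per_sprint)

def sprints_remaining(backlog_points, points_per_sprint):
    """Rough forecast: backlog size divided by average velocity, rounded up."""
    return math.ceil(backlog_points / velocity(points_per_sprint))

past_sprints = [21, 18, 24]   # points actually finished in the last three sprints
backlog = 105                 # points left in the prioritized backlog

print(velocity(past_sprints))               # 21.0
print(sprints_remaining(backlog, past_sprints))  # 5
```

If all stories are split to roughly the same size, the same forecast works with story counts instead of points, which is the “skip estimation” shortcut mentioned above.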

No prioritization

  • No prioritization usually leads to starting too many Use Cases, which leads to Sprints that produce nothing.
  • Incorrect prioritization also delays feedback on critical decisions while the team develops something irrelevant.
  • The best practice is to have the product backlog prioritized for the next couple of Sprints. It should be fairly easy for the Product Owner, Scrum Master or the Architect to explain the Use Cases of the next few Sprints.
  • The best practice is to start nothing new before something finishes AND then start the highest priority task.
  • Less is faster

No feedback

  • Each Sprint should provide the team with feedback. The customer gets the demo, the team gets the retrospective. Each Sprint is a step towards better practices. Project management tools, with their analyses and reports, are one source of this feedback.
  • The best practice is to produce a functional demo and hold a Sprint retrospective. To repeat: the best practice is to have a retrospective after every Sprint.
  • Reflect regularly

Inefficient individual

  • The best way forward is to support or coach the inefficient team member. Usually, people are not lazy but too busy. Mentoring and coaching are good ways forward.
  • The best practice is for everyone to keep developing and improving.
  • Each team member must allocate time for self-development

Inefficient team

  • The team must be able to self-organize. Team members must be able to work individually, but more importantly they must be able to work as a team: shared ways and common rules. Cherry-picking tasks, coding at too low or too high a quality, forgetting the common rules and so on are signs of a hot-shot who is not capable of teamwork.
  • The team must allocate time for team development

Doing the right things at the right time

Agility means making the right decisions at the right moment. Traditional management models are more interested in doing things right, which leads to an inside-out way of thinking: “We have CMMI level 5 quality standards.” And still the customer gets no additional value. Such management creates tons of additional bureaucracy without creating anything useful. Agile development advances at the best possible speed in the best possible direction, and speed and direction are revised constantly as new information and feedback emerge.

In sports, one improves agility by doing neuromuscular exercises in a well-rested condition. The same goes for knowledge work: stress, fatigue and haste kill agility, and work becomes stiff and laborious. W. E. Deming stated that 95% of performance issues are caused by the system itself and only 5% by the people. Even talented people will fail if the system does not give them room and freedom to prosper. Organize outside-in. Customer first.
Exploring ScrumBut—An empirical study of Scrum anti-patterns

A fixed project always fails
Fail transparently

Why to scale Agile
How to scale Agile

Reflect the ways of working in your organization

Jari Hietaniemi



The software world has been automating the building and updating of server infrastructure for quite a long time now. That’s all well and good, but the tool support and the culture of developing infrastructure automation feel like they are lagging behind regular software development. While infrastructure automation usually doesn’t provide value directly to the end-users, it is vital in making the actual running system and its deployment reliable, secure, up-to-date, repeatable, fault-tolerant, resilient, documented and so on.

When it comes to developing the actual product, software developers have been adopting practices and tools that speed up the development cycle and simultaneously increase the overall quality of the service you’re running. These practices include things like test automation, collaboration, pair working, short feedback cycles, peer reviewing, modularisation, adoption of functional programming practices and continuous integration/deployment. You should integrate these practices into the development of infrastructure automation too.

As always, it depends on the context which tools and methods one should use, but I’ll make some suggestions as to how you might be able to improve your infrastructure development. The suggestions might sound too obvious, but for some reason, infrastructure development has been treated as a second-class citizen and a lot of the obvious improvements have been neglected. The purpose of improving your infrastructure automation development process is, in the end, to reduce inefficiencies and improve the reliability and speed of deployment.

Explicitly Reserve Time and Resources for Infrastructure Development

Developing infrastructure automation takes time and resources, but it is often treated as a side activity that doesn’t need any allocation of resources. In our experience, infrastructure/ops work and its automation takes roughly 1/5th to 1/10th of the resources compared to the development of the actual product or service in a typical server-driven environment. This is of course highly context-dependent and shouldn’t be used as a strict guideline, but going way beyond this range might indicate a misallocation of resources. And, at the very least, you shouldn’t feel guilty about spending that much time on infrastructure/ops automation. It does take time.

The question of how to divide the effort to create and maintain infrastructure is a bit complicated. It is true that DevOps culture tries to close the gap between the traditional Dev and Ops roles, but it is rare for a Dev team to take full responsibility for infrastructure automation. There also seems to be a tendency within Dev teams for specific people to specialise in these tasks. I believe that the old strict separation of Dev and Ops activities is clearly inferior, but the type of collaboration model that works in each situation depends highly on the circumstances.

Make Your Infrastructure Development Cycle More Like a Product Development Cycle

When you accept that the development of infrastructure automation is similar (at least in some aspects) to any other software development activity, you need to start thinking about how to incorporate the practices that have made software development faster, more reliable and more agile into your infrastructure automation development process. No longer should you be making one-line changes whose impact you only see in the production environment, when the deployment of your software fails and your production suffers.

The picture below shows the typical arrangement of development activities; too often, infrastructure automation is not part of this normal development cycle but is done ad hoc:

(Image: the continuous development cycle)

Also, Continuously Integrate Infrastructure Changes

You are probably already building your software and running test suites for it after each commit. You should do this for your infrastructure automation code too, preferably in a modular, isolated way. If possible, build tests for your infrastructure automation code as well. As in regular software development, these tests at the unit-testing level should be fast and easy to run.

You could argue that if your continuous integration already runs your infrastructure automation when building your actual software then you don’t need to independently check infrastructure automation code. This is true to the extent that you don’t want to make duplicate tests for your code. Optimally exactly one test should fail if you introduce a bug. However, this sort of separation is really difficult to achieve. Furthermore, you should be able to test the infrastructure automation code in a shorter feedback cycle than as a part of some long-running operation that doesn’t clearly indicate the fault.
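To illustrate what a fast, unit-level check for infrastructure code might look like, here is a Python sketch that validates a service definition before anything is provisioned. The `name`/`image`/`port` schema and the `validate_service` helper are invented for this example and are not tied to any real tool:

```python
# Sketch of a fast, unit-level check for infrastructure code: validate a
# service definition as plain data before any environment is touched.
def validate_service(definition):
    """Return a list of problems found in a service definition dict."""
    problems = []
    # The required fields here are a made-up schema for illustration.
    for key in ("name", "image", "port"):
        if key not in definition:
            problems.append(f"missing required field: {key}")
    port = definition.get("port")
    if port is not None and not (1 <= port <= 65535):
        problems.append(f"port out of range: {port}")
    return problems

good = {"name": "web", "image": "nginx:1.25", "port": 8080}
bad = {"name": "web", "port": 99999}

print(validate_service(good))  # []
print(validate_service(bad))   # ['missing required field: image', 'port out of range: 99999']
```

A check like this runs in milliseconds on every commit and points directly at the fault, instead of the mistake surfacing minutes later somewhere inside a long-running provisioning step.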

Run Your Infrastructure Code against Clean Environments

Special attention should be paid to deciding how your infrastructure code is tested on clean, pristine environments. One of the big reasons to automate infrastructure building in the first place was to provide a deterministic and documented way to recreate the whole environment from scratch. But if you don’t do this regularly, you will end up in a situation where it is no longer possible. Often the changes work when applied incrementally, but not when the whole environment is built from scratch.


The tool support for creating virtual environments has taken giant leaps during the last decade, but in my opinion it is far from a solved problem. The speed at which you can set up the whole process, and eventually individual virtual environments, is still typically too slow. To make the development cycle comfortable, it should be a matter of minutes to set the process up and a matter of seconds to fire up a new environment and run your code in it.

The use of containers (Docker and the like) has helped significantly, but a container usually only covers an individual app or component. For any non-trivial software system, you’re going to have at least a few containers plus the surrounding glue and supporting services such as monitoring.

Don’t Forget Documentation

One of the goals of infrastructure automation is to document your infrastructure as program code. This is vastly better than outdated manual documents, or worse still, passed-on knowledge that exists only in a few people’s heads. But as we know from “self-documenting code” in regular software development, this is mostly a fantasy. Even in the best circumstances, the code only documents the ‘how’ and leaves the ‘why’ unanswered. And the circumstances are not always the best.

This is why I recommend developing practices that encourage documenting both your infrastructure and the actual automation code. You will need high-level pictures, design documents, comments, and well-designed, well-named infrastructure code. Perhaps one day your entire current team will be replaced by someone else, and those quick hacks and strange decisions will make no sense whatsoever.

Think in Modules and Make Things Testable

I feel there is a tendency to neglect modularisation when writing infrastructure code. In regular software development we are careful not to introduce dependencies between functionally separate parts of the code, but in infrastructure automation we sometimes let these sorts of unnecessary interlinks slip in between modules. This often manifests as a complicated or long procedure to enable the testing of a simple component. For example, running a single component in your setup might first require setting up multiple other components. Ideally, you should be able to functionally test the infrastructure code for a component without any extra dependencies.

This is of course highly dependent on the architecture of your actual infrastructure and sometimes you cannot avoid these dependencies. But I still think it is an important point to consider when developing the infrastructure and its automation. I find it helps to think in terms of test-driven development even if the actual testing as a unit is unfeasible. If you realise that in order to test a single component you would need to run all your infrastructure code first, you might consider a different approach for your code.
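One way to keep a component testable in isolation is to express its configuration as a pure function of its inputs. The Python sketch below is illustrative only; the `monitoring_config` helper and its output shape are assumptions, not any real tool’s API:

```python
# Sketch: one component's configuration as a pure function. Because it has
# no dependencies on other infrastructure modules, it can be tested alone.
def monitoring_config(service_name, port, environment="test"):
    """Build a (hypothetical) monitoring config for a single service."""
    return {
        "job": f"{service_name}-{environment}",
        "targets": [f"{service_name}:{port}"],
        # Illustrative rule: scrape production more frequently than test.
        "scrape_interval": "15s" if environment == "prod" else "60s",
    }

# The component can be exercised on its own; no other infrastructure
# modules need to exist for this check to run.
cfg = monitoring_config("web", 8080, environment="prod")
print(cfg["targets"])          # ['web:8080']
print(cfg["scrape_interval"])  # 15s
```

If, instead, building this config required live handles to a database module and a network module, even this trivial check would force you to run most of your infrastructure code first, which is exactly the smell described above.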

Peer Review for Improved Understanding and Efficiency

Peer reviewing is an easy and obvious way to improve your infrastructure development. Too often, infrastructure development is left outside peer reviewing and other collaboration methods, which further separates it from other development activities and often leaves it as the responsibility of only one or a few people. I’m not saying that each and every developer should contribute equally to all infrastructure development, but with peer reviewing you can spread knowledge and improve the quality of your code without too much effort.

If a developer who is inexperienced in ops/infra reacts to a pull request by saying they cannot understand the change, consider it a great opportunity: first, to teach the inner workings of your infrastructure, and second, to improve the change so that it is easier to understand, for example by adding comments or documentation about why the change was necessary, or simply by improving the naming of things.


The main lesson is that you should stop treating infrastructure automation as a side activity next to regular software development and start doing the things that speed up development and make the end product less error-prone. Improving the development process of infrastructure automation is not just yak-shaving: its purpose is to save you and your customer money. Finding bugs earlier in your development cycle is always a good idea, and this applies to infrastructure automation too.

Jarno Virtanen
