What could be more annoying than committing code changes to a repository and noticing afterwards that the formatting isn’t right or tests are failing? Your automated tests on Continuous Integration show rain clouds, and you have to get back to the code and fix minor issues with extra commits polluting the Git history. Fortunately, with small enhancements to your development workflow you can automatically prevent all this hassle and check your changes before committing them. The answer is to use Git hooks, for example a pre-commit hook for running linters and tests.

Git Hooks

Git hooks are scripts that Git executes before or after events such as commit, push, and receive. They’re a built-in feature and run locally. Hook scripts are only limited by a developer’s imagination. Some example hook scripts include:

  • pre-commit: Check the commit for linting errors.
  • pre-receive: Enforce project coding standards.
  • post-commit: Email team members of a new commit.
  • post-receive: Push the code to production.

Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You’re free to change or update these scripts as necessary, and Git will execute them when those events occur.
Git hooks can greatly increase your productivity as a developer, as you can automate tasks and ensure that your code is ready to be committed or pushed to a remote repository.
For more reading about Git hooks, you can check the Git hooks documentation, read the basics and follow a tutorial on how to use Git hooks in local Git clients and on Git servers.

Pre-commit

One productive way to use Git hooks is the pre-commit framework for managing and maintaining multi-language pre-commit hooks; there are also good tips available for using a pre-commit hook.
Pre-commit is nice, for example, for running linters to ensure that your changes conform to coding standards. All you need to do is install pre-commit and then add the hooks.
Installing pre-commit, ktlint and the ktlint pre-commit hook on macOS with Homebrew:

$ brew install pre-commit
$ brew install ktlint
$ ktlint --install-git-pre-commit-hook
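
If you use the pre-commit framework itself, hooks are declared in a .pre-commit-config.yaml file at the repository root and activated with pre-commit install. A minimal sketch of such a configuration (the hook repository and rev shown are illustrative):

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.2.3
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer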

For example, the pre-commit hook that runs ktlint with the auto-correct option looks like the following in the project’s .git/hooks/pre-commit. The “export PATH=/usr/local/bin:$PATH” line is there so that SourceTree finds Git on macOS.

#!/bin/sh
export PATH=/usr/local/bin:$PATH
# https://github.com/shyiko/ktlint pre-commit hook
# Lint staged Kotlin files; the [s"]? also matches the quoted paths Git prints for special characters
git diff --name-only --cached --relative | grep '\.kt[s"]\?$' | xargs ktlint -F --relative .
# Abort the commit on lint errors, otherwise re-stage the auto-corrected files
if [ $? -ne 0 ]; then exit 1; else git add .; fi

The main disadvantage of pre-commit and local Git hooks is that the hooks are kept within the .git directory, which never gets pushed to the remote repository. Each contributor has to install them manually in their local repository, which is easily overlooked.

Maven projects

The Githook Maven plugin deals with the problem of providing hook configuration to the repository and automates hook installation. It binds to the Maven project’s build process, and configures and installs local Git hooks.
It keeps a mapping between a hook name and its script by creating a file in .git/hooks for each configured hook during the Maven project’s initial lifecycle phase. Because the hooks are created during the build, each contributor gets them automatically; it’s good to notice that the plugin overwrites existing hooks.
Usage Example:

<build>
    <plugins>
        <plugin>
            <groupId>org.sandbox</groupId>
            <artifactId>githook-maven-plugin</artifactId>
            <version>1.0.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>install</goal>
                    </goals>
                    <configuration>
                        <hooks>
                            <pre-commit>
                                echo running validation build
                                exec mvn clean install
                            </pre-commit>
                        </hooks>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Git hooks for Node.js projects

On Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.
Husky is Git hooks made easy for Node.js projects. It keeps existing user hooks, and supports GUI Git clients and all Git hooks.
Husky is installed like any other npm library:

npm install husky --save-dev

The following configuration in your package.json runs the lint command (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to a remote repository.

"husky": {
   "hooks": {
     "pre-commit": "npm run lint",
     "pre-push": "npm run lint && npm run test"
   }
}

Another useful tool is lint-staged, which is typically run from a husky pre-commit hook and runs linters against staged Git files only.
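
A minimal sketch of how the two could be combined in package.json, assuming ESLint (in this generation of husky and lint-staged the corrected files are re-staged explicitly):

"husky": {
   "hooks": {
     "pre-commit": "lint-staged"
   }
},
"lint-staged": {
   "*.js": ["eslint --fix", "git add"]
}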

Summary

Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, husky or the Githook Maven plugin. You get better code and commit quality for free, and your team will be happier.

Marko Wallin

Marko works as a full stack software engineer and creates a better world through digitalization. He writes a blog about technology and software development and develops open source applications, e.g. for mobile phones. He also likes mountain biking.


Using version control is an essential part of modern software development, and using it efficiently should be part of every developer’s tool kit. Knowing the basic rules makes it even more useful. Here are some best practices that help you on your way.
tl;dr:

  1. Commit logical changesets (atomic commits)
  2. Commit Early, Commit Often
  3. Write Reasonable Commit Messages
  4. Don’t Commit Generated Sources
  5. Don’t Commit Half-Done Work
  6. Test Before You Commit
  7. Use Branches
  8. Agree on a Workflow

Commit logical changesets (atomic commits)

A commit should be a wrapper for related changes. Make sure your change reflects a single purpose: the fixing of a specific bug, the addition of a new feature, or some particular task. Small commits make it easier for other developers to understand the changes and roll them back if something went wrong.
Your commit will create a new revision number which can forever be used as a “name” for the change. You can mention this revision number in bug databases, or use it as an argument to merge should you want to undo the change or port it to another branch. Git makes it easy to create very granular commits.
So if you make changes to multiple logical components at the same time, commit them in separate parts. That way it’s easier to follow the changes and their history. Working on features A, B and C while fixing bugs 1, 2 and 3 should result in at least six commits.

Commit Early, Commit Often

It is recommended to commit code to version control often which keeps your commits small and, again, helps you commit only related changes. It also allows you to share your code more frequently with others.
It’s easier for everyone to integrate changes regularly and avoid having merge conflicts. Having a few large commits and sharing them rarely, in contrast, makes it hard to solve conflicts.

“If the code isn’t checked into source control, it doesn’t exist.”
Coding Horror

Write Reasonable Commit Messages

Always write a reasonable message for your commit. It should be short and descriptive, and tell what was changed and why.
Begin your message with a short summary of your changes (up to 50 characters as a guideline). Separate it from the following body by including a blank line.
It is also useful to add a prefix to your message, like Fix or Add, depending on the kind of change you made. Use the imperative, present tense (“change”, not “changed” or “changes”) to be consistent with generated messages from commands like git merge.
If you are fixing a bug or implementing a feature that has a JIRA ticket, add the ticket identifier as a prefix.
For example: “ISSUE-123 Fix bugs in the dropdown component for selecting items” or “ISSUE-1234 Fix bad allocations in image processing routines”.
Not like this: “Fixed some bugs.”
The body of your message should provide detailed answers to the following questions: What was the motivation for the change? How does it differ from the previous implementation?
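For example, a full message following these guidelines might look like this (the ticket number and details are illustrative):

ISSUE-123 Fix item selection in the dropdown component

Selecting an item with the keyboard fired the change handler twice,
which cleared the highlighted row. Debounce the handler and cover
the behaviour with a regression test.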

“If the changes you made are not important enough to comment on, they probably are not worth committing either.”
loop label

Don’t Commit Generated Sources

Don’t commit files which are generated dynamically or which are user dependent, such as the target folder, IDEA’s .iml files or Eclipse’s .settings and .project files. They change depending on what the user likes and don’t relate to the project’s code.
Likewise, the project’s binary files and Javadocs don’t belong in version control.

Don’t Commit Half-Done Work

You should only commit code when it’s completed. Split the feature’s implementation into logical chunks and remember to commit early and often. Use branches or consider using Git’s Stash feature if you need a clean working copy (to check out a branch, pull in changes, etc.).
On the other hand, you should never leave the office without committing your changes to a branch (on the remote repository).

“It’s better to have a broken build in your local working repository on a branch than a working build on your broken hard drive.”

Test Before You Commit

You should only commit code which is tested and passes the tests – and this includes code formatting checked with linters. Write and run tests to make sure the feature or bug fix really is complete and has no side effects (as far as one can tell).
Having your code tested is even more important when it comes to pushing/sharing your code with others.

Use Branches

Branching is one of Git’s most powerful features – and this is not by accident: quick and easy branching was a central requirement from day one. Branches are the perfect tool to help you avoid mixing up different lines of development.
You should use branches extensively in your development workflows: for new features, bug fixes and ideas.

Agree on a Workflow

Git lets you pick from a lot of different workflows: long-running branches, topic branches, merge or rebase, git-flow.
Which one you choose depends on a couple of factors: your project, your overall development and deployment workflows and (maybe most importantly) on your and your teammates’ personal preferences. However you choose to work, just make sure to agree on a common workflow that everyone follows.
Atlassian has a good article comparing workflows to suit your needs; it covers the centralized, feature branch, Gitflow and forking workflows.
Simplified Git Flow (source: https://buildazure.com/2018/02/21/introduction-to-git-version-control-workflow/)

Summary

Using version control is fortunately an acknowledged best practice and a normal part of software development. Using even a couple of the above practices makes working with the code much more pleasant. Adopting at least “Commit logical changesets” and “Write Reasonable Commit Messages” helps a lot.


Code quality is important in a software development project and a good metric to follow. Code coverage, technical debt and vulnerabilities in dependencies are some of the things you should track. There are de facto tools you can use to visualize these, and one of them is SonarQube. Here’s a short technical note on how to set it up for a Kotlin project and visualize metrics from different tools. We are using Detekt for static source code analysis and OWASP Dependency-Check to detect publicly disclosed vulnerabilities in project dependencies.

Visualizing Kotlin project metrics on SonarQube

SonarQube is a nice graphical tool for visualizing different metrics of your project. Lately it has also started to support Kotlin via the SonarKotlin and sonar-kotlin plugins. Compared to a typical Java project, you need some extra settings to get things working. It’s also good to notice that the Kotlin support isn’t quite there yet, and the sonar-kotlin plugin provides better information, e.g. when it comes to code coverage.
Steps to integrate reporting to Sonar with a Maven build:

  • Add configuration in the project pom.xml: Surefire, Failsafe, JaCoCo, Detekt and Dependency-Check
  • Run Sonar in Docker
  • Run the Maven build with the sonar:sonar option
  • Check the Sonar dashboard

(SonarQube project overview)

Configure a Kotlin project

Configure your Kotlin project, built with Maven, to have test reporting and static analysis. We are using Surefire to run unit tests, Failsafe for integration tests and JaCoCo to generate reports for e.g. SonarQube. See the full pom.xml from an example project (coming soon).

Test results reporting

pom.xml

<properties>
    <sonar.coverage.jacoco.xmlReportPaths>${project.build.directory}/site/jacoco/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>default-prepare-agent</id>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>pre-integration-test</id>
                    <goals>
                        <goal>prepare-agent-integration</goal>
                    </goals>
                </execution>
                <execution>
                    <id>jacoco-site</id>
                    <phase>verify</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <skipTests>${unit-tests.skip}</skipTests>
                <excludes>
                    <exclude>**/*IT.java</exclude>
                    <exclude>**/*IT.kt</exclude>
                    <exclude>**/*IT.class</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <skipTests>${integration-tests.skip}</skipTests>
                <includes>
                    <include>**/*IT.class</include>
                </includes>
                <runOrder>alphabetical</runOrder>
            </configuration>
        </plugin>
    </plugins>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.3</version>
            </plugin>
        </plugins>
    </pluginManagement>
...
</build>
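
The ${unit-tests.skip} and ${integration-tests.skip} properties referenced above are not shown in the snippet; a minimal sketch of how they could be declared (the property names are assumed from the plugin configuration):

<properties>
    <unit-tests.skip>false</unit-tests.skip>
    <integration-tests.skip>false</integration-tests.skip>
</properties>

Declaring them as properties also lets you skip a test group from the command line, e.g. mvn verify -Dintegration-tests.skip=true.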

Static code analysis with Detekt

Detekt static code analysis is configured as an AntRun task; there’s also an unofficial Maven plugin for Detekt. It’s good to notice that Detekt produces some “false positive” findings, and you can either customize the Detekt rules or suppress findings if they are intentional, for example with @Suppress("MagicNumber").
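For example, an intentional finding can be suppressed at the declaration level in the Kotlin source (a minimal sketch; the function is illustrative):

// Suppress an intentional Detekt MagicNumber finding for this declaration
@Suppress("MagicNumber")
fun retryDelayMs(attempt: Int): Long = attempt * 250L
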
(Detekt code smells)
pom.xml

<properties>
    <sonar.kotlin.detekt.reportPaths>${project.build.directory}/detekt.xml</sonar.kotlin.detekt.reportPaths>
</properties>
<build>
...
<plugins>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.8</version>
    <executions>
        <execution>
            <!-- This can be run separately with mvn antrun:run@detekt -->
            <id>detekt</id>
            <phase>verify</phase>
            <configuration>
                <target name="detekt">
                    <java taskname="detekt" dir="${basedir}"
                          fork="true"
                          failonerror="false"
                          classname="io.gitlab.arturbosch.detekt.cli.Main"
                          classpathref="maven.plugin.classpath">
                        <arg value="--input"/>
                        <arg value="${basedir}/src"/>
                        <arg value="--filters"/>
                        <arg value=".*/target/.*,.*/resources/.*"/>
                        <arg value="--report"/>
                        <arg value="xml:${project.build.directory}/detekt.xml"/>
                    </java>
                </target>
            </configuration>
            <goals>
                <goal>run</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>io.gitlab.arturbosch.detekt</groupId>
            <artifactId>detekt-cli</artifactId>
            <version>1.0.0-RC14</version>
        </dependency>
    </dependencies>
</plugin>
</plugins>
...
</build>

Dependency checks

Dependency checks are done with the OWASP Dependency-Check Maven plugin.
(OWASP Dependency-Check)
pom.xml

<properties>
    <dependency.check.report.dir>${project.build.directory}/dependency-check</dependency.check.report.dir>
    <sonar.host.url>http://localhost:9000/</sonar.host.url>
    <sonar.dependencyCheck.reportPath>${dependency.check.report.dir}/dependency-check-report.xml</sonar.dependencyCheck.reportPath>
    <sonar.dependencyCheck.htmlReportPath>${dependency.check.report.dir}/dependency-check-report.html</sonar.dependencyCheck.htmlReportPath>
</properties>
<build>
...
<plugins>
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>4.0.2</version>
    <configuration>
        <format>ALL</format>
        <skipProvidedScope>true</skipProvidedScope>
        <skipRuntimeScope>true</skipRuntimeScope>
        <outputDirectory>${dependency.check.report.dir}</outputDirectory>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
</plugins>
...
</build>

Sonar scanner to run with Maven

<build>
...
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.sonarsource.scanner.maven</groupId>
                <artifactId>sonar-maven-plugin</artifactId>
                <version>3.6.0.1398</version>
            </plugin>
        </plugins>
    </pluginManagement>
...
</build>

Running Sonar with a Kotlin plugin

Create a SonarQube server with Docker

$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube

There’s also an OWASP Docker image for SonarQube which adds several community plugins to enable SAST, but for our purposes the “plain” SonarQube works nicely.
Use the Kotlin plugin which comes with SonarQube (SonarKotlin), or install the sonar-kotlin plugin which shows information differently. If you want to use sonar-kotlin and are using the official Docker image for SonarQube, you first have to remove the SonarKotlin plugin.
Using sonar-kotlin

$ git clone https://github.com/arturbosch/sonar-kotlin
$ cd sonar-kotlin
$ mvn package
$ docker exec -it sonarqube sh -c "ls /opt/sonarqube/extensions/plugins"
$ docker exec -it sonarqube sh -c "rm /opt/sonarqube/extensions/plugins/sonar-kotlin-plugin-1.5.0.315.jar"
$ docker cp target/sonar-kotlin-0.5.2.jar sonarqube:/opt/sonarqube/extensions/plugins
$ docker stop sonarqube
$ docker start sonarqube

Adding dependency-check-sonar-plugin to SonarQube

$ curl -JLO https://github.com/SonarSecurityCommunity/dependency-check-sonar-plugin/releases/download/1.2.1/sonar-dependency-check-plugin-1.2.1.jar
$ docker cp sonar-dependency-check-plugin-1.2.1.jar sonarqube:/opt/sonarqube/extensions/plugins
$ docker stop sonarqube
$ docker start sonarqube

Run tests on the project and scan with Sonar

The verify phase runs your tests and should generate, among other things, jacoco.xml under target/site/jacoco as well as detekt.xml.

$ mvn clean verify sonar:sonar

Access Sonar via http://localhost:9000/

Code quality metrics? So what?

You now have metrics on Sonar to show to stakeholders, but what should you do with those numbers?
One use case is to set up quality gates on SonarQube: a set of conditions that must be met before the project can be released into production. Ensuring the quality of “new” code while fixing existing issues is a good way to maintain a healthy codebase over time. A quality gate defines rules for validating every piece of new code added to the codebase on subsequent analyses. By default the gate fails if coverage on new code is below 80%, the percentage of duplicated lines on new code is greater than 3, or the maintainability, reliability or security rating is worse than A. The default rules provide a good starting point for your project’s quality metrics.


In part 3 of my blog series on AngularJS migration, I go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.

Preparing your Application for Migration

Before beginning to migrate it’s necessary to prepare and align your AngularJS application with Angular. These preparation steps are all about making the code more decoupled, more maintainable, and better aligned with modern development tools.

The AngularJS Style Guide

Ensure that the current code base follows the AngularJS style guide: https://github.com/johnpapa/angular-styleguide/blob/master/a1/README.md. Angular takes the best parts of AngularJS and leaves behind the not-so-great parts. If you have built your AngularJS application in a structured way using best practices, it will already include the best parts and none of the bad parts, making migration much easier.
The key concepts of the style guide are:

  1. One component per file. Structuring components in this way will make them easier to find and easier to migrate one at a time.
  2. Use a ‘folders by feature’ structure so that different parts of the application are in their own folders and NgModules.
  3. Use Component Directives. In Angular, applications are built from components; the equivalent in AngularJS is a component directive with specific attributes set (see the sketch after this list), namely:
    • restrict: 'E' – components are usually used as elements.
    • scope: {} – an isolate scope. In Angular, components are always isolated from their surroundings, and you should do this in AngularJS too.
    • bindToController: {} – component inputs and outputs should be bound to the controller instead of using $scope.
    • controller and controllerAs – components have their own controllers.
    • template or templateUrl – components have their own templates.
  4. Use a module loader like SystemJS or Webpack to import all of the components in the application rather than writing individual imports in <script> tags. This makes managing your components easier and also allows you to bundle up the application for deployment.
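
As a sketch, a component directive following these conventions might look like this (all names are illustrative):

angular.module('myApp').directive('heroDetail', function() {
  return {
    restrict: 'E',            // used as an element
    scope: {},                // isolate scope
    bindToController: {
      hero: '<'               // input bound to the controller, not $scope
    },
    controller: function HeroDetailController() { /* component logic */ },
    controllerAs: '$ctrl',
    template: '<h2>{{$ctrl.hero.name}}</h2>'
  };
});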

Migrating to TypeScript

The style guide also suggests migrating to TypeScript before moving to Angular; however, this can also be done as you migrate each component. Information on the recommended approach can be found at https://angular.io/guide/upgrade#migrating-to-typescript. My recommendation would be to leave any migration to TypeScript until you begin to migrate the AngularJS components.

Hybrid Routers

Angular Router

Angular has a new router that replaces the one in AngularJS. The two routers can’t be used at the same time, but the AngularJS router can serve Angular components while you do the migration.
In order to switch to the new built-in Angular router, you must first convert all your AngularJS components to Angular. Once this is done you can switch over to the Angular router, even though the application is still hosted as an AngularJS application.
In order to bring in the Angular router, you need to create a new top-level component that has the <router-outlet></router-outlet> component in its template. The Angular.io upgrade guide has steps to take you through this process: https://angular.io/guide/upgrade#adding-the-angular-router-and-bootstrap

Angular-UI Router

UI-Router has a hybrid version that serves both AngularJS and Angular components. While migrating to Angular, this hybrid version needs to be used until all components and services are migrated; then the new UI-Router for Angular can be used instead.
To use the hybrid version you will first need to remove angular-ui-router (or @uirouter/angularjs) from the application’s package.json and add @uirouter/angular-hybrid instead.
The next step is to add the ui.router.upgrade module to your AngularJS application’s dependencies:

let ng1module = angular.module('myApp', ['ui.router', 'ui.router.upgrade']);

There are some specific bootstrapping requirements to initialise the hybrid UI-Router; step-by-step instructions are documented in the repository’s wiki: https://github.com/ui-router/angular-hybrid

Implementation

Bootstrapping a Hybrid Application

In order to run AngularJS and Angular simultaneously, you need to bootstrap both versions manually. If you have automatically bootstrapped your AngularJS application using the ng-app directive then delete all references to it in the HTML template. If you are doing this in preparation for migration then manually bootstrap the AngularJS application using the angular.bootstrap function.
When bootstrapping a hybrid application you first need to bootstrap Angular, and then use the UpgradeModule to bootstrap AngularJS. In order to do this you need to create an Angular application to begin migrating to! There are a number of ways to do this: the official upgrade guide suggests using the Angular QuickStart project, but you could also use the Angular CLI. If you don’t know anything about Angular versions 2 and above, now is the time to get familiar with the new framework you’ll be migrating to.
Now you should have a manually bootstrapped AngularJS version and a non-bootstrapped Angular version of your application. The next step is to install the @angular/upgrade package so you can bootstrap both versions.
Run npm install @angular/upgrade --save. Then create a new root module in your Angular application called app.module.ts and import the upgrade package.

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['angularJSapp'], { strictDi: true });
 }
}

This new app module is used to bootstrap the AngularJS application; replace “angularJSapp” with the name of your AngularJS application.
Finally, update the Angular entry file (usually main.ts) to bootstrap the app.module we’ve just created.
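
A minimal sketch of that entry file (assuming the module lives at app/app.module.ts):

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// Bootstrap Angular first; AppModule.ngDoBootstrap() then bootstraps AngularJS.
platformBrowserDynamic().bootstrapModule(AppModule);
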
That’s it! You are now running a hybrid application. The next step is to begin converting your AngularJS Directives and Services to Angular versions. The Google walkthrough that these steps are based on can be found at https://angular.io/guide/upgrade#bootstrapping-hybrid-applications

Doing the Migration

Using Angular Components from AngularJS Code

If you are following the horizontal slicing method of migration mentioned earlier, you will need to use newly migrated Angular components in the AngularJS version of the application. The following examples are adapted from the official upgrade documentation; for more detailed examples see https://angular.io/guide/upgrade#bootstrapping-hybrid-applications
Below is a simple Angular component:

import { Component } from '@angular/core';
@Component({
 selector: 'hero-detail',
 template: `
   <h2>Windstorm details!</h2>
   <div><label>id: </label>1</div>
 `
})
export class HeroDetailComponent { }

To use this in AngularJS you will first need to downgrade it using the downgradeComponent function in the upgrade package we imported earlier. This will create an AngularJS directive that can then be used in the AngularJS application.

import { HeroDetailComponent } from './hero-detail.component';
/* . . . */
import { downgradeComponent } from '@angular/upgrade/static';
angular.module('heroApp', [])
 .directive(
   'heroDetail',
   downgradeComponent({ component: HeroDetailComponent }) as angular.IDirectiveFactory
 );

The Angular component still needs to be added to the declarations in the AppModule. Because this component is being used from the AngularJS module and is an entry point into the Angular application, you must add it to the entryComponents for the NgModule.

import { HeroDetailComponent } from './hero-detail.component';
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ],
 declarations: [
   HeroDetailComponent
 ],
 entryComponents: [
   HeroDetailComponent
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
 }
}

You can now use the heroDetail directive in any of the AngularJS templates.
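
For example, using the element form of the directive in a template:

<hero-detail></hero-detail>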

Using AngularJS Component Directives from Angular Code

In most cases you will need to use Angular components in the AngularJS application; however, the reverse is also possible.
If your components follow the component directive style described in the AngularJS style guide, then it’s possible to upgrade simple components. Take the following basic component directive:

export const heroDetail = {
 template: `
   <h2>Windstorm details!</h2>
   <div><label>id: </label>1</div>
 `,
 controller: function() {
 }
};

This component can be upgraded by modifying it to extend the UpgradeComponent.

import { Directive, ElementRef, Injector, SimpleChanges } from '@angular/core';
import { UpgradeComponent } from '@angular/upgrade/static';
@Directive({
 selector: 'hero-detail'
})
export class HeroDetailDirective extends UpgradeComponent {
 constructor(elementRef: ElementRef, injector: Injector) {
   super('heroDetail', elementRef, injector);
 }
}

Now you have an Angular component based on your AngularJS component directive that can be used in your Angular application. To include it simply add it to the declarations array in app.module.ts.

app.module.ts
@NgModule({
 imports: [
   BrowserModule,
   UpgradeModule
 ],
 declarations: [
   HeroDetailDirective,
/* . . . */
 ]
})
export class AppModule {
 constructor(private upgrade: UpgradeModule) { }
 ngDoBootstrap() {
   this.upgrade.bootstrap(document.body, ['heroApp'], { strictDi: true });
 }
}

Migrating your component directives and services should now be relatively straightforward. A detailed example of migrating the Angular Phone Catalogue example, which includes examples of transclusion, can be found at https://angular.io/guide/upgrade#bootstrapping-hybrid-applications
For the most part, if the AngularJS style guide has been followed, the change from component directives to components should simply be a syntax change, as no internal logic should need to change. That said, there are some services that are not available in Angular, so alternatives need to be found. Below is a list of some common issues that I’ve experienced when migrating AngularJS projects.

Removing $rootScope

Since $rootScope is not available in Angular, all references to it must be removed from the application. Common solutions for the usual $rootScope scenarios are replacing a $rootScope.$broadcast/$on event bus with a shared service built on an RxJS Subject, and moving values stored on $rootScope into injectable services.
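A minimal sketch of such an event-bus service (the names are illustrative):

import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

// Shared service replacing $rootScope.$broadcast/$on usage.
@Injectable({ providedIn: 'root' })
export class EventBusService {
  private events = new Subject<string>();
  // Consumers subscribe to this instead of $rootScope.$on
  events$ = this.events.asObservable();

  // Publishers call this instead of $rootScope.$broadcast
  emit(event: string) {
    this.events.next(event);
  }
}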

Removing $compile

Like $rootScope, $compile is not available in Angular so all references to it must be removed from the application. Below are solutions to most scenarios of $compile being used:

  • The DomSanitizer module from '@angular/platform-browser' can be used to replace $compileProvider.aHrefSanitizationWhitelist.
  • $compileProvider.preAssignBindingsEnabled(true) – this function is now deprecated. Components requiring bindings to be available in the constructor should be rewritten to only require bindings to be available in $onInit().
  • Replace the need for $compile(element)($scope); by utilising the Dynamic Component Loader: https://angular.io/guide/dynamic-component-loader.
  • Components will need to be rewritten to remove $element.replaceWith().

Conclusion

In this 3-part blog series we’ve covered the reasons for migrating, the current AngularJS landscape, migration tips and resources, methods for migration, preparing for a migration, different ways of using migrated components and common architectural changes.
The goal of this blog series was to give a comprehensive guide, based on my experience, to anyone considering migrating from AngularJS to Angular. Hopefully we’ve achieved this, and if your problems haven’t been addressed directly in the blog, the links will have pointed you in the right direction. If you have any questions, please post them in the comments.
AngularJS migration is not an easy task, but it’s not impossible! Good preparation and planning are key, and hopefully this blog series will help you on your way.

Sources

You can read part 1 of this series here: https://gofore.com/en/migrating-from-angularjs-part-1/
And you can read part 2 here: https://gofore.com/en/migrating-from-angularjs-part-2/

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.

Linkedin profile


In part 2 of my blog series on AngularJS migration, I’ll discuss the different methods for migrating an application and highlight the tools and resources that make it possible.

Tools and Resources

ngMigration Assistant

In August 2018, Elana Olson from the Angular Developer Relations team at Google announced the launch of the ngMigration Assistant. When run, this command-line tool will analyse a code base and produce statistics on code complexity, size and patterns used in an app. The ngMigration Assistant will then offer advice on a migration path and preparation steps to take before beginning the migration.
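To try the tool yourself (assuming the npm package name ngma used at launch – check the project’s README for current usage):

$ npm install -g ngma
$ ngma -d path/to/your/app
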
The goal of the ngMigration Assistant is to supply simple, clear, and constructive guidance on how to migrate an application. Here is some example output from the tool:

Complexity: 86 controllers, 57 AngularJS components, 438 JavaScript files, and 0 Typescript files.
  * App size: 151998 lines of code
  * File Count: 943 total files/folders, 691 relevant files/folders
  * AngularJS Patterns:  $rootScope, $compile, JavaScript,  .controller
Recommendation
Please follow these preparation steps in the files identified before migrating with ngUpgrade.
  * App contains $rootScope, please refactor rootScope into services.
  * App contains $compile, please rewrite compile to eliminate dynamic feature of templates.
  * App contains 438 JavaScript files that need to be converted to TypeScript.
      To learn more, visit https://angular.io/guide/upgrade#migrating-to-typescript
  * App contains 86 controllers that need to be converted to AngularJS components.
      To learn more, visit https://docs.angularjs.org/guide/component

The ngMigration Assistant tool is a great place to start when considering migrating an AngularJS project. The statistics and advice it gives will help quantify the effort the migration will take, and can highlight particular patterns that will need to be addressed. Be warned that the tool doesn’t cover everything; there will be additional areas of the application (external libraries and some logic, for example) that will need reworking during migration. It’s a good first step but not comprehensive.

ngMigration Forum

The ngMigration Forum gathers together resources, guides and tools for AngularJS migration. The forum allows developers to ask questions and get answers to their migration problems, and it also collates the common issues that occur during migration.

The angular.io Upgrade Guide

The angular.io Upgrade Guide contains a number of examples and walkthroughs on how to proceed with an AngularJS migration. Written by the Angular team, the guide addresses the most common cases and has a complete example of migrating the Phone Catalogue example application.

Deciding How to Migrate

There are 3 major approaches to migrating an AngularJS application to Angular.

Complete Rewrite in Angular

The first decision to make when considering migrating your AngularJS application is whether you will do it incrementally or not. If you need to support an existing application, or the application is too large to fully migrate in a reasonable timeframe, then an incremental upgrade may be the only path open. However, if the application is small enough, or if you are able to stop supporting the existing application or allocate enough resources, then a complete rewrite is usually the most straightforward approach.
Migrate the whole application without supporting the AngularJS version:
Pros

  • You don’t have to worry about upgrading or downgrading components
  • No interoperability issues between AngularJS and Angular
  • Opportunity to refactor areas of the code
  • Can benefit from Angular features immediately

Cons

  • The application will be offline during the migration or you will need to copy the code base to a new repository
  • You don’t see the benefits until the whole application is migrated which could take some time depending on the overall size
  • Since you will not see the whole application running until the end of the migration you may discover issues as you build more features

Hybrid Applications

ngUpgrade

ngUpgrade is an Angular library that allows you to build a hybrid Angular application. The library can bootstrap an AngularJS application from an Angular application allowing you to mix AngularJS and Angular components inside the same application.
I will go into more detail on the ngUpgrade library in Part 3: Implementing the Migration, but for now it’s important to know that ngUpgrade allows you to upgrade AngularJS directives to run in Angular and downgrade Angular components to run in AngularJS.

Horizontal Slicing

When migrating using a Hybrid approach there are two methods that will gradually move your application from AngularJS to Angular. Each has its advantages and disadvantages which I’ll discuss next.
Horizontal slicing describes the method of migrating the building-block components first (low-level components like user inputs, date pickers etc.), then all components that use those components, and so on until you have upgraded the entire component tree.
(Image: Victor Savkin)
The term references the way that components are migrated in slices cutting across the whole application.
Pros

  • The application can be upgraded without any downtime
  • Benefits are realised quickly as each component is migrated

Cons

  • It requires additional effort to upgrade and downgrade components

Vertical Slicing

Vertical slicing describes the method of migrating one route or feature of the application at a time. Unlike horizontal slicing, views won’t mix AngularJS and Angular components; instead, each view will consist entirely of components from one framework or the other. If services or components are shared across the application, they are duplicated for each version.
(Image: Victor Savkin)
Pros

  • The application can be upgraded while in production
  • Benefits are gained as each route is migrated
  • You don’t have to worry about compatibility between AngularJS and Angular components

Cons

  • It takes longer to migrate a route so benefits aren’t seen as quickly as horizontal slicing
  • Components and services may need to be duplicated if required by AngularJS and Angular versions

Effort Needed to Migrate

Which method you adopt depends entirely on your business objectives and the size of the application. In most cases I’ve found that the hybrid approach is required, and more often than not I’ve used vertical slicing during the migration. Maintaining a single working application at all times has always been a priority in my experience. Since the applications have also been very large, the cleanest way to organise the migration across multiple teams has been to split the application up either by feature or by route and migrate each one in turn.
The amount of effort required again depends on your particular circumstances (size of the code base, number of people etc.). I’ve found that putting everyone to work on the migration at once leads to confusion and, in turn, wasted effort. Instead, by having a small team begin the work, bootstrap the hybrid application and produce some migrated components and services, the rest of the team spends less effort getting started and can focus on scaling out the migration.

Part 3: Implementing the Migration

In part 3 I’ll go into fine detail on what code changes need to happen in preparation for the migration and how the actual migration is done.

Sources

You can read part 3 of this series here: https://gofore.com/en/migration-from-angularjs-part-3/
You can read part 1 of this series here: https://gofore.com/en/migrating-from-angularjs-part-1/


First, what do we mean when we talk about maintenance?

We make a lot of custom-made solutions, systems, applications and services for our customers. These projects can last anywhere from a few weeks to a few years, but they do usually have a specific goal and an expiration date. However amazing the final product is, and however much we’ve learned from creating it, the product will only start bringing value to our customers after it’s gone live. This is when we enter the maintenance phase.
Software maintenance offers the customer technical support, incident management and bug fixes, plus change management and further development to their existing live product. We want to guarantee that our super amazing product keeps being super amazing and does not simply fall into decay after it’s gone live. This is a matter of pride to all of us: quality in Gofore’s project delivery even after the project has been delivered.


How would you prepare for a marathon?

A software project typically has a beginning, includes various steps taken to create the desired product, and finally, it comes to an end. You might find yourself tempted to think that the end of the project signifies the end of the software company’s work. However, the final release of the development project is the starting gun for the software maintenance phase.
It is part of our expertise at Gofore that, at the very start of a project, we explain to the customer that we should be making plans for when the product goes live and what happens after that. You wouldn’t run a marathon without practising and training for it. The maintenance phase can last for years after the product has gone live! For example, projects usually have multiple waypoints or sprint goals during the development phase. Maintenance should be included in this thinking – not just as a single point or event to reach, but as a natural and continuous extension of the development work.

Have you ever felt like…?


Not to worry, we have a solution for you: a centralized service desk and organized software maintenance.
While we who work with software maintenance daily are very excited about our great services, the most common thing we’ve heard in the past is that “only creating new products and services through coding is fun and exciting,” while maintenance is sometimes seen as a boring routine or an ungrateful chore that no one wants to do.
If you view maintenance this way, you probably aren’t up to speed with the latest news from the world of Service Desks. Maintenance in the year 2019 looks very different from even just a few years back, and it keeps evolving at a fast pace. Robotics and automation already take care of those boring routine tasks. The first line no longer just escalates tickets or parrots, “Have you tried turning it off and on again?” Those days are history.
At Gofore, our Service Center consists of specialists who resolve the complex issues the customers couldn’t solve themselves. As all the products we create are custom-made, maintaining them requires deep understanding and knowledge of a multitude of systems, programming languages and infrastructures. Service management, i.e. the maintenance-phase equivalent of project management, is also expected to grow in importance in the next few years. Service management and software maintenance require more and more expertise and specialized people year after year.

Don’t just take our word for it…

Here are some thoughts from our developers:

“Maintenance tasks improve your problem-solving skills, out-of-the-box thinking, social skills, and increase familiarity with the architecture. Participating in software maintenance is beneficial to all developers.” – Antti Simonen
“Software maintenance offers unique insight into the application’s issues and gives you a chance to make the customer happy. The quality of the code is continuously improved by maintaining your own applications.” – Petri Sarasvirta
“Understanding the application from someone else’s perspective enables you to write code that can be maintained more easily. One of the best ways to gain a better grasp of the big picture is through software maintenance.” – Antti Peltola

Our biggest supporters are our customers. Every month the people who do software maintenance at Gofore receive 5-star reviews from our very happy customers!

Actionable steps to success!

Here are some things to consider if you are a software developer:

  • When you write code, write it for others, not yourself. To put it another way: if you can’t read your own code without a 30-page manual right now, you can only imagine how impossible the task is for someone who has to find and fix a bug in it two years later.
  • Make sure your commits are sufficiently descriptive. As all Agile developers know, documentation should not be a forced burden – but it is necessary, nonetheless. Work smart, not hard, and make sure your code speaks for itself. Remember to also keep your software delivery mechanisms (CI/CD) and infrastructure (servers, firewalls, etc) sufficiently documented.
  • Have tests and monitoring in place for production. You are the expert on what needs to be monitored and how it should be done.

And some notes for the project managers in our midst:

  • Allow time for proper documentation and make sure your customer understands its value. It should be your ethical guideline that we cannot skip such an important part of the project’s delivery.
  • Make sure your project team tests and monitors things that are significant in terms of business value. You have a unique understanding of both what is important to the customer and what your team can deliver.
  • Start preparing for the production/maintenance handover well on time. The earlier you give your colleagues who work with maintenance a heads-up, the better they can help you make the transition as smooth as possible.

Value for the customer

Continuous services guarantee that the custom-made system, application or service works as planned throughout its lifecycle. Stability and quality in continuous services are a matter of honour and pride to our service managers. We are seasoned professionals and know how to navigate and translate between the development team and the customer, making sure all parties understand each other.
We meet customer expectations by proactively offering new solutions and further development, keeping in mind improving the customer’s business. Continuous services free the customer’s resources from maintenance to their own business. Finally, the most important thing we offer is peace of mind – the customer simply raises a ticket describing their concerns, and our Service Center swiftly takes care of the rest.

What’s in it for me?

So, you might be wondering, “What’s in it for me?” To sum up, keeping the maintenance phase in mind has its benefits…

  • …for sales, longer lasting customer relationships
  • …for developers, doing maintenance makes you “harder, better, faster, stronger”
  • …for project managers, less stress about moving to production
  • …for our customers, stability, quality and peace of mind

Ella Lopperi

As Head of Continuous Services at Gofore, Ella is responsible for nurturing the expert community, as well as for operations and strategy. She values open communication, empathy and transparency, and believes these values are key to both great employee and customer experience. Outside of work, Ella can be found reading, playing videogames, singing, writing... or simply immersing herself in the wonders of the Universe.

Linkedin profile

Jenna Salo

Jenna works as the Continuous Services Lead and a Service Manager. Providing her customers with peace of mind is the guiding principle for Jenna's everyday work. Work culture is also dear to her heart. In her spare time, Jenna is the humble servant of two chihuahuas, and yankee cars and circle skirts light a fire in her soul.

Linkedin profile
Twitter profile


Gofore has recently completed a number of large scale AngularJS migration projects. During these projects, we’ve gathered a lot of information on the whole Angular migration process from the motivation to migrate, down to the finer technical details of the migration itself.
The purpose of this blog is to catalogue this information and offer guidance to anyone that is considering migrating. Part 1 will focus on the reasons to migrate while part 2 will detail the tools and techniques available when doing the migration and the final part will focus on the migration itself in detail.

What do we mean by AngularJS and Angular?

As multiple versions of Angular were developed, differentiating between the two incompatible versions became more important. Blogs, projects and discussions had to establish which version of Angular they were compatible with.
To reduce confusion the Angular team suggested a naming convention: AngularJS refers to any 1.x version, i.e. the versions that came before the major rewrite that resulted in Angular 2, and any version from 2.0 up is simply referred to as Angular.

Why did Google decide to make such substantial changes to Angular?

As AngularJS grew in popularity and was being used for bigger and bigger applications, developers started to notice performance issues. In an interview in 2018 Stephen Fluin, Developer Advocate for Angular at Google looked back at the reasons why Angular was built and said:

“There were millions of AngularJS developers and millions of AngularJS apps. The cracks started showing. If you wrote an AngularJS app the wrong way and had thousands of things on the screen, it ended up getting very slow. The architecture was just not designed with this kind of large-scale usage in mind.”

As Google started to address the growing concerns in the development community, they came to the realisation that revolutionary rather than evolutionary changes were needed. Stephen Fluin, in the same article, goes on to say:

“The Team realized that there wasn’t an easy path to make AngularJS what it needed to be. And that’s why Angular was born. We moved from, for example, a controller and template model into a more component-based model. We added a compilation step that solved whole categories of errors that people would make in AngularJS.”

Although Google continued to support AngularJS, it was clear that the future would be focused on Angular, and a concerted effort was made to encourage developers to move to the new platform.

What’s the Current AngularJS Landscape?

It’s difficult to quantify how many active AngularJS applications are currently in production. However, in January 2018, Pete Bacon Darwin, AngularJS Lead Developer at Google, stated:

“In October of 2017, the user base of Angular passed 1 million developers (based on 30 day users to our documentation), and became larger than the user base of AngularJS.”

From this we can deduce that up until October 2017 AngularJS had a million active developers, which would translate into a lot of AngularJS applications. Pete Bacon Darwin goes on to say:

“We will release a couple more versions this summer that includes a few important features and fixes before we move into the mode of only making security and dependency related fixes, but that is all.”

Clearly Google’s goal is to move as many applications onto Angular as possible as AngularJS moves onto legacy support. For applications that are still being actively developed this perhaps makes sense, but what about legacy applications – should they be migrated? Google’s current legacy support plan runs until June 30, 2021; after this point there is a risk that security and breaking issues will no longer be patched. Migration of legacy applications that will be used beyond this date should be considered.

Why Migrate?

Performance Increase

AngularJS can be an efficient framework for small applications, but as projects grow and the number of scopes and bindings increases, this has a significant impact on performance. Angular 6 introduced a new rendering engine which substantially decreased compilation time and bundle sizes, while Web Workers and server-side rendering open up the possibility of further significant performance boosts.

Language

Angular is built in TypeScript, a typed language that compiles to JavaScript. TypeScript significantly reduces runtime errors by identifying them at an early stage. Catching errors early speeds up development and increases stability.

Mobile Support

Unlike AngularJS, Angular is built from the ground up to support development across platforms. Angular components can be reused across multiple environments, reducing the amount of duplication needed to get applications running on mobile devices. This, together with a smaller overall memory footprint, makes Angular run faster on mobile devices.

Tooling Support

The inclusion of the Angular CLI allows developers to build services, modules and components quickly by utilising templates. This frees developers up to focus on building or improving features rather than writing boilerplate code.

Structure

AngularJS provides a flexible way of building applications that can quickly become unwieldy if not supported by strict coding standards. Angular imposes a structured, component-based architecture on applications, making building and maintaining larger applications much easier.

Data Binding

Two-way data binding in AngularJS was one of the primary causes of slowdown in larger applications: the bigger the application, the more checks had to be done in each digest cycle. Angular’s change detection strategy eliminates the need to check branches where no changes have occurred, significantly reducing the number of checks made in each cycle.

Updates

Google announced in July 2018 that AngularJS would enter a 3-year long-term support stage. Updates would only be made if one of the following scenarios came about:

  • A security flaw is detected in the 1.7.x branch of the framework
  • One of the major browsers releases a version that will cause current production applications using AngularJS 1.7.x to stop working
  • The jQuery library releases a version that will cause current production applications using AngularJS 1.7.x to stop working.

With support ending on June 30, 2021, there is a risk that security and breaking issues will no longer be patched.
Google strongly advises migrating and has moved to a structured, timed release schedule for Angular, with new versions released every six months. Currently, Angular is at version 7, and Google has announced a roadmap running to version 9 at the end of 2019. All major releases have at least 18 months of support, and there are no plans for the kind of breaking changes that happened between AngularJS and Angular.

Part 2: Tools, Resources and Methods

In part 2 I’ll discuss the different methods for migrating an application and highlight the tools and resources that make it possible.

Sources

You can read part 2 of this series here: https://gofore.com/en/migrating-from-angularjs-part-2/
You can read part 3 of this series here: https://gofore.com/en/migration-from-angularjs-part-3/

Rhys Jevons

Rhys is a Senior Software Architect with over 10 years of experience in Digital Transformation Projects in the Media, Transport and Industrial sectors. Rhys has a passion for software development and user experience and enjoys taking on complicated real-world problems.



Did you know that the first ever webpage was completely responsive by default? And then we broke it with overcomplicated customizations. (Accessibility, Back to the Future | Bruce Lawson | Monki Gras 2019)

A bit about the technical debt of the HTML standard

HTML was originally written by Tim Berners-Lee in 1993 and was updated to version 4.01, for a long time the most used version, in 1999; the XHTML 1.0 standard was based on it. I remember first wondering what the <div> tag, introduced in HTML 3.2 in 1997, was for. The W3C (World Wide Web Consortium) abandoned XHTML after version 2.0 but used the functions XHTML supported when designing HTML5. As with CSS3 and JavaScript ES6, you usually need to know which features browsers support, and the same goes for HTML5, but luckily the semantic elements of HTML5 have been widely supported for many years now.

Use semantic tags to communicate meaning, not for presentation purposes

HTML elements are chosen by what the content is – not by their appearance. Simply put, HTML semantics are HTML tags that have a meaning.
<p> <h1> <form> <input> <textarea> <label> <select> <button> <blockquote> <q> <code> <em> <strong> <pre> <sub> <sup> <table> <thead> <tfoot> <th> <td> <header> <nav> <main> <article> <section> <aside> <footer> <address> <time> <data> <cite> <del> <ins> <abbr> <dfn> <figure> <figcaption> <kbd> <var> *<b> *<i>
* semantic in HTML5 (for ensuring accessibility use <em> and <strong>)
Common tags that don’t include semantic meaning in HTML are <div> and <span>. That doesn’t mean that they couldn’t have some semantic meaning in visual user experience or cognitive accessibility.
When we think about creating a webpage or a web app, we should be thinking about communicating. The visual appearance is a separately defined part of the result. A semantic HTML page provides meaningful information to the browser and to other clients such as screen readers, search engines and developers analysing the source, so semantics go far beyond how the content looks on a page.
The browser creates the basic visuals based on the semantics. In today’s real world, the visual design often needs to be something completely different from what browsers default to; we can use languages, tools and libraries to get the desired visual styles, or simply reset the defaults so they always behave the same way. Don’t use HTML tags in an HTML document (or in the layout and views of a web app) just for their common display properties; separate that concern and think first about the semantic structure.
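As a simple illustration (a hypothetical sketch, not taken from any real project), compare an anonymous <div> structure with one that communicates meaning:

<!-- Says nothing about what the content is -->
<div class="top">…</div>
<div class="middle">…</div>
<div class="bottom">…</div>

<!-- Communicates the structure to browsers, screen readers and developers -->
<header>…</header>
<main>
  <article>
    <h1>The topic of this article</h1>
    <p>Content chosen by meaning – its appearance is defined separately in CSS.</p>
  </article>
</main>
<footer>…</footer>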

Prevent the most common mistakes – use CSS to modify visual appearance.

  • Don’t use <h1>–<h6> elements for text sizing (see the sketch after this list).
  • Dividing content with <div>s says nothing about the structure or why the contents are in separate containers.
  • Only <li> tags are allowed as direct descendants (rendered content) of <ul> or <ol>.
  • <blockquote>, <ul> or <ol> should not be used for indentation.
  • Don’t define margins and padding (spacing) with semantic HTML, e.g. <p>&nbsp;</p>.
  • The <table> element represents tabular data, not layout. Use <div>s and <span>s to implement visual layout and styling.
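For example, to get a visually smaller heading, adjust the size in CSS instead of picking a lower heading level. A hypothetical sketch (the class name is made up):

<style>
  /* Size is a presentation concern, so it belongs in CSS */
  .compact-heading { font-size: 1rem; }
</style>

<!-- Still a second-level heading semantically, just rendered smaller -->
<h2 class="compact-heading">Related reading</h2>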


NOTE:
Authors are strongly encouraged to view the div element as an element of last resort, for when no other element is suitable. Use of more appropriate elements instead of the div element leads to better accessibility for readers and easier maintainability for authors.
— https://www.w3.org/TR/html5/grouping-content.html#the-div-element

It is worth mentioning that you cannot escape understanding the basics by using libraries and frameworks. Even Semantic-UI – advertised as using concise HTML – still has notable ongoing issues with accessibility. Wouldn’t you also find it misleading to be called a fullstack developer or a web developer without knowing how to create a basic HTML document?

The key to Accessibility

The first key to accessibility – and even to user experience for people with disabilities – is learning to use semantics.

As a developer, you should be more focused on writing semantic HTML documents than on CSS (and on creating a proper design system so that developers can stop writing custom CSS). Semantic HTML improves the readability of code – closing tags tell you what they are closing – and understanding a new codebase is easier when it follows the standards.

Maybe I should make sure that others can also see how I think of the content.

How about we make the web a bit more equal?

Spend a few moments during Global Accessibility Awareness Day on May 16th, 2019 learning and sharing the basics of accessible software development.
So how do you start? All the information you need is available to everyone on the wild wild web, but one of my favourite sites for related information is MDN Web Docs.

Joonas Kallunki

Joonas is a visually minded front-end developer constantly pursuing new ways to reach contentment in application experience. He is interested in the interactions between technology and humanity overall. His developer life is balanced with lots of music, exercise in nature and precious family moments. "The CSS dude can handle this".



Many of us use Windows in our daily (development) work, either by choice or forced by external factors, such as the client IT environment or application restrictions. For years, I used git bash to get around the Windows command line’s shortcomings; then I discovered the awesome Cmder, which is almost a Unix-like terminal replacement. But then again, why resort to a replacement when you can have the best of both worlds? In this tutorial, you’ll learn how to install a Linux subsystem on your Windows machine (if you have never heard of that, I know, it sounds weird and potentially frightening) and after that we’ll continue by installing Hyper.js AND zsh to get a terminal just like on any UNIX system. Probably the best invention since the cheese slicer.

Warming up

Before installing any Linux distros, ensure that the Windows Subsystem for Linux (WSL for short) optional feature is enabled. Enabling this is straightforward: open PowerShell as Administrator (a handy shortcut: right-click on the Start menu) and run:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Restart your computer when prompted.
After you’re back on the desktop, the easiest way to download your preferred Linux distribution is from the Windows Store. Search for e.g. ‘linux’ and choose your favourite. If you don’t have a preference, Ubuntu is a safe bet. The download might take a few moments, so grab a coffee here. If you have enough bandwidth for another download, this might be a good point to download Hyper from https://hyper.is.

Enter Linux

After the installation is complete, search for your newly installed Linux distro in the start menu to finish the process. On first boot, some initial installation steps are completed, so be ready to wait for a while once again.
With enough patience, you’ll be prompted to create your UNIX account.
Tada, Linux.

Installing a better terminal

If you didn’t do so already, download Hyper.js from https://hyper.is/. Hyper is an extendable terminal replacement built on Electron that runs on any platform, so you might want to try it out on other operating systems as well. Not to mention that it’s beautiful and fully themeable. Unless you’re scared of everything that runs on Electron, of course.
Booting up Hyper.js for the first time brings you to the default Windows command prompt.
Hit

Ctrl + ,

to open the preferences in a text editor, and scroll down to ‘shell’.
We want Hyper to log in to bash instead, so change this line to

shell: `C:\\Windows\\System32\\bash.exe`

While you’re at it, you might want to change some other settings to match your preferences, e.g. specify your favourite colours or add a predefined theme or other plugins:

plugins: ['hyper-material-theme']

All set! To check that bash works, open a new tab from the menu or by pressing

Ctrl + Shift + T

Now you’re halfway there. Leave the terminal open; we’ll need it later.

Installing zsh and Oh-My-Zsh

For those not familiar with Linux lingo, zsh is a customisable alternative shell for UNIX systems. Oh-My-Zsh builds on top of it, helping you manage your configuration with community-developed plugins and themes that make one’s life easier (it really does). So, if you left the tab open in the previous step, run

sudo apt-get update

and

sudo apt-get install zsh

to install zsh just like on any other Unix system.
Now we need to make zsh our default shell. This has to happen at login because, as far as I know, the default login shell on WSL cannot be changed.
For that, open .bashrc in your favourite text editor

nano ~/.bashrc

And write

bash -c zsh

as the first line in order to tell bash to switch to zsh at login.
Save and close the file.
Finally, we’re ready to install Oh-My-Zsh by running

sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

And you’re good to go! Just remember to review your settings in .zshrc.
Finally, in a proper terminal
One of the first things you might notice is that npm does not work. Now that we’re living inside Linux, the easiest option is to install Node (or nvm, if you need it) again on Linux by running

sudo apt-get install nodejs

Additionally, in case you run into issues, you might have to point your PATH to

PATH="$HOME/bin:$HOME/.local/bin:/usr/bin:$PATH"

in ~/.zshrc.

Bonus round – usage in VS Code

Open settings in JSON (Ctrl + Shift + P, type settings), and add

"terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe"

That pretty much sums things up. Now you have made your terminal great again – on Windows!

Some useful guides to continue diving deeper

The oh-my-zsh wiki – https://github.com/robbyrussell/oh-my-zsh/wiki
Installing Powerline fonts required by some themes – https://medium.com/@slmeng/how-to-install-powerline-fonts-in-windows-b2eedecace58
Making Windows-native Docker work on WSL – https://medium.freecodecamp.org/how-to-set-up-docker-and-windows-subsystem-for-linux-a-love-story-35c856968991

Arno Lehtonen

Arno is a software developer based in Tampere who favours aesthetic and usable interfaces in both code and UI.


This is part 2 of a three-part blog series explaining how I wrote some code to control the basic features of a DJI Ryze Tello drone. I set myself this challenge ahead of a hackathon event held at our offices in Swansea, UK. In part 1 I explained how to connect to the drone and send commands to make it do tasks such as take off and land. You can read part 1 here. Below I take things one step further, allowing you to fly the drone in various directions.

Directional movement

Getting the drone to move in a specified direction is a very similar process to what we have already done, with one slight difference. The directional commands “left”, “right”, “forward” and “back” each accept an integer after the command; the integer is how many centimetres the drone will move in the specified direction.
First, we need to refactor our handleInput function a little. As we will now be sending a value from 20 to 200 after some of our commands, writing a switch case to handle each and every possible combination would be a bad idea. Instead, we will use the string.startsWith method to check that our line starts with a keyword such as “forward” or “left”, and then take the amount from the end of the line using string.split.
Unfortunately, we can’t just add a boolean expression as a case in our switch, due to the way switch works. In short, switch uses the strict equality operator (===) to compare its argument against whatever is on the right-hand side of each case keyword, which means functions and boolean expressions won’t be matched.
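To illustrate with a small hypothetical sketch: the case expression evaluates to a boolean, which is then compared with === against the string itself, so the branch can never match a command line.

const line = "forward 20";
switch (line) {
    // line.startsWith("forward") evaluates to true, and
    // "forward 20" === true is false, so this branch is dead code.
    case line.startsWith("forward"):
        console.log("never reached");
        break;
    default:
        console.log("always ends up here");
}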
Our way around this blocker is to go back to a good old if statement. First, let’s cater for the functionality we already have. What I like to do here is create simple functions that encapsulate the boolean logic so we can re-use it, like the functions below:

function isTakeoff(line) {
    return line === "takeoff";
}
function isLand(line) {
    return line === "land";
}

The functions take in the line and simply return true or false depending on how the boolean expression evaluates.
We can then create a series of if statements using these functions to execute our sendTakeOff and sendLand functions, like below:

if (isTakeoff(line)) {
    try {
        await sendTakeOff(socket);
    } catch (err) {
        console.log(err);
    }
}
if (isLand(line)) {
    try {
        await sendLand(socket);
    } catch (err) {
        console.log(err);
    }
}

Once that refactoring is done, the handleInput function should look something like this:

async function handleInput(line, socket) {
    function isTakeoff(line) {
        return line === "takeoff";
    }
    function isLand(line) {
        return line === "land";
    }
    if (isTakeoff(line)) {
        try {
            await sendTakeOff(socket);
        } catch (err) {
            console.log(err);
        }
    }
    if (isLand(line)) {
        try {
            await sendLand(socket);
        } catch (err) {
            console.log(err);
        }
    }
}

I’ve declared the isTakeoff and isLand functions within the scope of handleInput just for the sake of keeping everything together.
Now we can create a function to detect when we are sending a forward command to the drone, along with a sendForward function. This will be very similar to the sendTakeOff and sendLand functions we created earlier, but this time we will add a distance parameter with a default value of 20. That way, if a user neglects to send a distance value with the command, we can safely default to the lowest possible value and still send a valid command to the drone.

function sendForward(socket, distance = 20) {
    return new Promise((resolve, reject) => {
        const command = `forward ${distance}`;
        // Reject instead of throwing inside the callback so the awaiting
        // try/catch can actually handle a send failure.
        socket.send(command, 0, command.length, TELLO_CMD_PORT, TELLO_HOST, err => {
            if (err) {
                reject(err);
            } else {
                resolve();
            }
        });
    });
}

Finally, we can add our if statement:

if (isForward(line)) {
    const [name, dist] = line.split(" ");
    try {
        await sendForward(socket, dist);
    } catch (err) {
        console.log(err);
    }
}
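Note that the isForward helper isn’t defined in the snippets above; following the string.startsWith approach described earlier, a minimal sketch could be:

// Minimal sketch – matches any line beginning with the "forward"
// keyword, e.g. "forward 20".
function isForward(line) {
    return line.startsWith("forward");
}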

Although it doesn’t look like much is going on in that if statement in terms of lines of code, two very important things are happening. First, we call the string’s split method on our line; split breaks a string into sections wherever the specified token appears. We are splitting by a single white-space, and our line should be in the format: the word “forward”, followed by a single white-space and then an integer, for example “forward 20”. The split call therefore returns an array with two elements, like this: [“forward”, “20”].
As you can see from the example, the first element, at position 0 of the array, will be “forward”, and the second element, at position 1, will be “20”. We then use array destructuring to assign the first element to the variable name and the second to the variable dist. The rest is simple: we call our sendForward function with the socket and dist as arguments, and the command is sent to the drone.
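To make the mechanics concrete, here is a tiny standalone sketch (illustrative only) of what the split and destructuring produce:

const line = "forward 20";
const [name, dist] = line.split(" "); // ["forward", "20"]
console.log(name); // "forward"
console.log(dist); // "20" – still a string, which is fine when building the command text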
Let’s give it a shot. Fire up the application using:

> node ./src/app.js

Once we are up and running, issue the “takeoff” command to the drone. Once the take-off sequence is complete, we can instruct the drone to move forward with a command like below:

> forward 20

We should then see the drone move in a forward direction.
To add the other directions we can just repeat the above steps, but instead of sending “forward” we just need to send either “back”, “left” or “right”.
The output you should see in the terminal: [terminal screenshot]

Summary

I can’t say I ever imagined that I would be writing code to control a drone, certainly not JavaScript code anyway, but I’m sure glad I did. While admittedly the real-life applications of the code above are close to non-existent, there’s actually quite a lot of scope for growth in this little project, and it’s certainly a nice way to test out some skills and even learn some more.
A nice way to expand this would be to create some form of UI other than the command line because, let’s be fair, typing commands is hardly practical. Perhaps the basic concepts I’ve touched on here could be wrapped in some form of Electron or React/Vue/Angular app. Even taking the stream of data that’s available from the drone and creating some sort of visualisation from it would be an interesting coding challenge. I will explore some of these in part 3.
All of the code I’ve written here (and some extras) can be found in my GitHub https://github.com/csscottc/drone-ctrl – Feel free to check it out and use it however you see fit. Part 1 of this blog series can be found here.
In part 3 I will explain how to build a simple UI that will complete the original task of allowing a non-coder the ability to control the drone.

Scott Carpenter

A Certified Scrum Master (Scrum Alliance), Scott is a Senior Software Developer based in the Gofore UK office in Swansea. Passionate about the JavaScript family of technologies (node/React/Angular) and very much enjoys creating awesome apps that run on the client or the server. Scott is also very interested in cloud computing, specifically Amazon Web Services and Google Cloud as well as microservices.
