Keep Maven dependencies up to date

Software development projects usually come with lots of dependencies, and keeping them up to date can be burdensome if done manually. Fortunately, there are tools to help you. For Node.js projects there are, for example, npm-check and npm-check-updates, and for Maven projects there are the OWASP Dependency-Check and Versions Maven plugins. Here’s a short introduction to setting up your Maven project to automatically check dependencies for vulnerabilities and for outdated versions.

OWASP/Dependency-Check

OWASP Dependency-Check is an open-source solution to the OWASP Top 10 2013 entry “A9 – Using Components with Known Vulnerabilities”.
Dependency-Check can currently be used to scan Java and .NET applications to identify the use of known vulnerable components. The dependency-check plugin is, by default, tied to the verify or site phase, depending on whether it is configured as a build or a reporting plugin.
The example below is executed in the build’s verify phase and can be run using mvn verify:

<project>
    ...
    <build>
        ...
        <plugins>
            ...
            <plugin>
                <groupId>org.owasp</groupId>
                <artifactId>dependency-check-maven</artifactId>
                <version>5.0.0-M3</version>
                <configuration>
                    <failBuildOnCVSS>8</failBuildOnCVSS>
                    <skipProvidedScope>true</skipProvidedScope>
                    <skipRuntimeScope>true</skipRuntimeScope>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>check</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
        ...
    </build>
    ...
</project>

The example fails the build for any dependency with a CVSS score of 8 or higher and skips scanning the provided and runtime scoped dependencies.
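You don’t have to wait for a full build to see the results: the plugin’s check goal can also be invoked on its own (a sketch using the standard groupId:artifactId:goal invocation):

$ mvn verify                                  # runs the check as part of the normal build
$ mvn org.owasp:dependency-check-maven:check  # runs only the dependency scan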

Versions Maven Plugin

The Versions Maven Plugin is the de facto standard way to manage versions of artifacts in a project’s POM. From high-level comparisons between remote repositories down to low-level timestamp-locking for SNAPSHOT versions, its extensive list of goals lets us take care of every aspect of our projects involving dependencies.
An example configuration of the versions-maven-plugin:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>versions-maven-plugin</artifactId>
    <version>2.7</version>
    <configuration>
        <allowAnyUpdates>false</allowAnyUpdates>
        <allowMajorUpdates>false</allowMajorUpdates>
        <allowMinorUpdates>false</allowMinorUpdates>
        <processDependencyManagement>false</processDependencyManagement>
    </configuration>
</plugin>

You could use goals that modify the pom.xml, as described in the usage documentation, but it’s often easier to check versions manually, as you might not be able to update all of the suggested dependencies.
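For reference, the modifying goals look like this (a sketch; versions:revert and versions:commit operate on the pom.xml.versionsBackup files the plugin creates):

$ mvn versions:use-latest-releases   # rewrite dependency versions to the latest releases
$ mvn versions:revert                # undo the rewrite from the backup files
$ mvn versions:commit                # accept the rewrite and delete the backup files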
The display-dependency-updates goal will check all the dependencies used in your project and display a list of those dependencies with newer versions available.
Check new dependencies with:

mvn versions:display-dependency-updates

Check new plugin versions with:

mvn versions:display-plugin-updates

Summary

Using OWASP Dependency-Check in your continuous integration build flow to automatically check dependencies for vulnerabilities, and periodically running the Versions Maven Plugin to check for outdated dependencies, helps you keep your project up to date and secure. These are small but important things to remember while developing and maintaining a software project.

Marko Wallin

Marko Wallin works as a software engineer at Gofore, changing the world for the better through digitalisation. He has years of experience in software development, agile methods and programming, covering front ends, back-end services and databases. In his free time Marko shares his technical expertise through his blogs and by developing, among other things, open-source mobile applications. Besides software development, he enjoys mountain biking.


A business model is a plan for how to execute an organization’s strategy; it can also be a summary of an organization’s business logic. Modelling a business often requires understanding the target domain, for example to build a common understanding within a cross-functional team. Many tools, templates and frameworks are available for modelling these plans or summaries, and most of them use graphics to make the model easier to understand and to ensure that different perspectives are considered.
One of these templates, and an easily accessible one, is the Business Model Canvas (BMC). The objective of the BMC is to provide a commonly known way to clearly picture the ontology around the target domain. The BMC offers pre-defined building blocks for dividing up your business domain or a single product or service – it scales well, fitting both small and large target domains. The BMC consists of nine building blocks, covering perspectives from the value proposition, customer thinking, business infrastructure and economics.
(Figure: the Business Model Canvas)
  • The Value proposition building block sits in the middle of the canvas. It defines the value or benefits customers get from using the organization’s products or services. For example, a service’s value proposition might be to provide a faster and more reliable connection between users.
  • The Customer segments building block defines the customer groups for whom value will be produced. Customer segments can be sorted by different criteria, for example age, country or industry.
  • The Customer relationships building block answers questions such as: what kind of relationships does the organization have with different customer segments, and how are they maintained? For example, how to create relationships with new customers, how to maintain existing customer relationships, or how to develop relationships with potential customers in the future.
  • The Channels building block represents all the defined ways to reach customers, i.e. how the value proposition is delivered to them. For example, an item bought from an online shop is delivered via the post office, while a local store sells the item on the spot.
  • The Key resources building block defines all the resources required to produce and provide the value proposition. Resources can be divided, for example, into tangible (people, IT infrastructure) and intangible (patents, brand) resources.
  • The Key activities building block answers the question: what are the main tasks or functions needed to deliver the value proposition? For example, activities can be grouped by manufacturing lines, services, or problems to be solved.
  • The Key partners building block defines the most important stakeholders needed to complete all the necessary key activities.
  • The Revenue streams building block defines the pricing of the value proposition. Different prices can be defined for different customer segments.
  • The Cost structure building block defines all the costs of the activities needed to produce and deliver the value proposition, for example the costs of marketing, distribution and manufacturing. Costs can be divided into fixed and variable costs.
One further development of the BMC is the Service Logic Business Model Canvas, which particularly emphasises customer thinking in these building blocks. For example, the value proposition considers which problems customers are trying to solve with a product or a service, or which specific features particular customer segments expect from the value proposition.

Towards a common understanding

Designing with graphical tools helps participants understand the target domain from different starting points; for instance, project members from different sectors might see the result in very different ways. The BMC template offers a fast and effective way to begin brainstorming, and especially to collect and compare ideas. All you need is a printed BMC template and lots of Post-it notes. Collecting something concrete on a wall makes it possible to start discussions and take the business model to the next level.
Finally, a couple of tips for effective teamwork with the BMC:

  • Using different sizes and colours of Post-it notes lets you highlight ideas or easily split them into groups
  • You may use multiple notes to write down words or draw pictures: tell a story while placing them onto the canvas (good stories are remembered for a long time)
  • Be open-minded while having conversations about ideas – it’s dangerous to fall in love with your own ideas
  • It might be necessary to go through multiple rounds before reaching a satisfying solution, so it’s important to evaluate iteratively which ideas work and which do not
  • It might be a good idea to split a larger group of participants into smaller groups and compare their outputs – finally, collect the best ideas together

(Photo: Post-it notes on the canvas)
Downloadable BMC template:
http://www.businessmodelgeneration.com/downloads/business_model_canvas_poster.pdf
Example of using the Service Logic Business Model Canvas:
https://pdfs.semanticscholar.org/8be1/75561b64ad8172cc7d5a0859da9c9460bda8.pdf
Sources:
Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers (2010). http://businessmodelgeneration.com/book
Osterwalder, Alexander et al. “The Business Model Ontology: A Proposition in a Design Science Approach” (2004). http://www.hec.unil.ch/aosterwa/PhD/Osterwalder_PhD_BM_Ontology.pdf

Antti Luoma

Antti Luoma works as a service architect at Gofore. He has a Master’s degree from the University of Eastern Finland, where he majored in computer science. Antti is an expert in the comprehensive use of architecture descriptions in business model design and project management.



What could be more annoying than committing code changes to a repository and noticing afterwards that the formatting isn’t right or tests are failing? Your automated tests on the continuous integration server show rain clouds, and you have to get back to the code and fix minor issues with extra commits polluting the Git history. Fortunately, with small enhancements to your development workflow you can automatically prevent all this hassle and check your changes before committing them. The answer is to use Git hooks, for example a pre-commit hook that runs linters and tests.

Git Hooks

Git hooks are scripts that Git executes before or after events such as commit, push, and receive. They’re a built-in feature and run locally. Hook scripts are only limited by a developer’s imagination. Some example hook scripts include:

  • pre-commit: Check the commit for linting errors.
  • pre-receive: Enforce project coding standards.
  • post-commit: Email team members of a new commit.
  • post-receive: Push the code to production.

Every Git repository has a .git/hooks folder with a script for each hook you can bind to. You’re free to change or update these scripts as necessary, and Git will execute them when those events occur.
Git hooks can greatly increase your productivity as a developer, as you can automate tasks and ensure that your code is ready to be committed or pushed to a remote repository.
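As a reminder of the mechanics, a hook is just an executable script named after its event, living in .git/hooks (a sketch; the .sample files are the placeholders Git creates in every repository):

$ ls .git/hooks                                         # pre-commit.sample, pre-push.sample, ...
$ mv .git/hooks/pre-commit.sample .git/hooks/pre-commit
$ chmod +x .git/hooks/pre-commit                        # Git now runs it before every commit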
For more reading about Git hooks, you can check the missing Git hooks documentation, read the basics, and check a tutorial on how to use Git hooks in local Git clients and on Git servers.

Pre-commit

One productive way to use Git hooks is the pre-commit framework for managing and maintaining multi-language pre-commit hooks. Read the tips for using a pre-commit hook.
Pre-commit is nice, for example, for running linters to ensure that your changes conform to coding standards. All you need to do is install pre-commit and then add hooks.
Installing pre-commit, ktlint and ktlint’s pre-commit hook on macOS with Homebrew:

$ brew install pre-commit
$ brew install ktlint
$ ktlint --install-git-pre-commit-hook
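The pre-commit framework itself reads its hook definitions from a .pre-commit-config.yaml file in the repository root; once that file exists, two commands cover the daily workflow (a sketch of the framework’s standard commands):

$ pre-commit install          # writes the framework’s launcher into .git/hooks/pre-commit
$ pre-commit run --all-files  # runs every configured hook once against the whole codebase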

For example, the pre-commit hook that runs ktlint with the auto-correct option looks like the following in the project’s .git/hooks/pre-commit. The “export PATH=/usr/local/bin:$PATH” line is there so that SourceTree finds Git on macOS.

#!/bin/sh
export PATH=/usr/local/bin:$PATH
# https://github.com/shyiko/ktlint pre-commit hook
# Lint (and auto-correct, -F) only the staged Kotlin files
git diff --name-only --cached --relative | grep '\.kt[s"]\?$' | xargs ktlint -F --relative .
# Abort the commit if errors remain; otherwise stage the auto-corrections
# (note that 'git add .' stages all working-tree changes, not only the corrected files)
if [ $? -ne 0 ]; then exit 1; else git add .; fi

The main disadvantage of pre-commit and local Git hooks is that hooks are kept inside the .git directory, which never reaches the remote repository. Each contributor has to install them manually in their local clone, which is easily overlooked.

Maven projects

The Githook Maven plugin addresses the problem of keeping hook configuration in the repository and automates hook installation. It binds to the Maven project’s build process, configuring and installing local Git hooks.
It keeps a mapping between a hook name and its script by creating a file for each hook in .git/hooks, containing the given script, during the Maven build’s initial lifecycle phase. It’s good to notice that the plugin overwrites existing hooks.
Usage Example:

<build>
    <plugins>
        <plugin>
            <groupId>org.sandbox</groupId>
            <artifactId>githook-maven-plugin</artifactId>
            <version>1.0.0</version>
            <executions>
                <execution>
                    <goals>
                        <goal>install</goal>
                    </goals>
                    <configuration>
                        <hooks>
                            <pre-commit>
                                echo running validation build
                                exec mvn clean install
                            </pre-commit>
                        </hooks>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
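After this, any build that reaches the early lifecycle phases (re)installs the hooks for whoever runs it, so a plain build is enough (a sketch, assuming the plugin binds to the initial lifecycle phase as described above):

$ mvn initialize              # or any longer build, e.g. mvn clean install
$ cat .git/hooks/pre-commit   # now contains the configured script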

Git hooks for Node.js projects

In Node.js projects you can define scripts in package.json and run them with npm, which enables another approach to running Git hooks.
🐶 Husky makes Git hooks easy for Node.js projects. It keeps existing user hooks and supports GUI Git clients and all Git hooks.
Husky is installed like any other npm library:

npm install husky --save-dev

The following configuration in your package.json runs the lint script (e.g. eslint with --fix) when you try to commit, and runs lint and tests (e.g. mocha, jest) when you try to push to a remote repository. This assumes that lint and test scripts are defined in package.json.

"husky": {
   "hooks": {
     "pre-commit": "npm run lint",
     "pre-push": "npm run lint && npm run test"
   }
}
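With this in place the hooks fire on the normal Git commands, with nothing extra to run (a sketch, assuming the lint and test scripts above exist):

$ git commit -m "Add feature"   # Husky runs "npm run lint" first; the commit aborts on failure
$ git push                      # Husky runs lint and tests before anything leaves your machine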

Another useful tool is lint-staged, which works together with Husky and runs linters only against staged Git files.

Summary

Make your development workflow easier by automating all the things. Check your changes before committing them with pre-commit, Husky or the Githook Maven plugin. You get better code and commit quality for free, and your team will be happier.

Marko Wallin


Harness the power of the mob

(Photo: a group of Goforeans)
Software development shouldn’t be an isolated island with little outside contact. However, communication between a development team and external stakeholders isn’t always easy, especially if their sole shared moments happen during sprint rituals. When themes such as transparency or communication come up in retrospectives, a team should consider incorporating new ways of knowledge sharing. One such way is mob programming.
Our development team was at a crossroads of sorts. With some new members and the departure of others, we felt the need to synchronize, since our customer has multiple products of varying maturity and technology stacks. At the same time, their own developers had begun to work more closely with us, so overall there was some confusion over how to get the most out of our team. It was in one retrospective, where communication and visibility were discussed, that a developer suggested mob programming as a possible solution. Despite none of us having tried the method before, we decided to give it a spin in the next sprint.

Working simultaneously

Mob programming extends the principles of pair programming to an entire team. In a mob session, attendees work simultaneously on the same problem, using only one computer. While implementations of mob programming vary, at its simplest there are only three roles: the driver, the navigator and the mobber. The driver does all the coding, but they should not make independent decisions: they listen to the guidance of the navigator, who in turn discusses with the mobbers. The navigator should give instructions as detailed as necessary, considering the background of the driver, while the mobbers follow the progress via big displays. All work happens in timed cycles, with the roles rotating in between. In our case, this meant that a product owner found themselves writing code, and a UX designer pushed changes to a remote repository.
We started with our scrum master as a facilitator. He took care of the practicalities and made sure the team was committed to trying out the new method. We set up a development environment in a conference room and started working towards our goal. It was a slow start, but little by little, code started to appear. Whenever we hit an impasse, we took a breather and made sure we were heading in the right direction. At the end of the day, we had laid promising groundwork for a new feature. Satisfied with the results, we booked time for another mob session in our next sprint planning.

Exceeding expectations

In the long run, the effects of mob programming have exceeded our expectations. In addition to getting actual work done, the whole team has become closer. Everyone, regardless of their level of expertise, has learned new things about our architecture, best practices, handy IDE shortcuts and more. The sessions have been so well received that we began doing them regularly. The whole team, the customer included, is invited. While on paper it may seem wasteful to frequently assign multiple people to work on one item, mob programming can bring widespread benefits:

  • Instant communication between team members, since no time is spent waiting for answers or sending emails back and forth. Communicating face-to-face reinforces good teamwork practices and the mentality of giving and receiving help. Kindness and respect will be on the rise.
  • Improved decision making. The possibility to discuss problems and pitfalls together reduces overall reluctance to make decisions. This collaboration increases the individuals’ commitment to the results.
  • Reduced waste, especially if the product owner is included in the sessions. The team can get instant feedback on their progress, discarding unwanted additions.
  • Improved code quality. Having more than one set of eyes reviewing the code reduces technical debt. Best practices easily spread across the team, and even previously unidentified technical debt can be recognized.
  • Reduced context switching, since the whole team becomes familiar with the feature being worked on. Mob programming can also shine a light on dependencies between different parts of the software, increasing overall familiarity with the architecture.

Ultimately, mob programming is a tool I can recommend to any team, whether they’re facing issues or not. When starting out, decide the topics beforehand and get everyone up to speed with the goals. This increases commitment, reduces anxiety and maximizes the time available for the actual work. Once your session is underway, make sure that people remain in their roles. Experienced developers might be tempted to drive while ignoring or pre-empting the navigator’s instructions, a pattern that should be prevented. Once your team’s confidence grows, you can start tweaking parameters such as the roles or timing to better suit your needs. You can also try applying the method outside programming, for example in UI design or user story creation. Have fun firing up your own mob!

Anssi Juvonen

Anssi is a 'Gofore people person' and a full-stack developer with over ten years of experience building software solutions. He is currently working with Vue.js in a front-end focussed role.



Code quality in a software development project is important and a good metric to follow. Code coverage, technical debt and vulnerabilities in dependencies are some of the things you should keep an eye on. There are de facto tools for visualizing them, and one of those is SonarQube. Here’s a short technical note on how to set it up for a Kotlin project and visualize metrics from different tools. We are using Detekt for static source code analysis and OWASP Dependency-Check to detect publicly disclosed vulnerabilities contained within project dependencies.

Visualizing Kotlin project metrics on SonarQube

SonarQube is a nice graphical tool for visualizing different metrics of your project. Lately it has also started to support Kotlin, through the SonarKotlin and sonar-kotlin plugins. Compared to a typical Java project you need some extra settings to get things working. It’s also good to notice that Kotlin support isn’t quite there yet, and sonar-kotlin provides better information, e.g. when it comes to code coverage.
Steps to integrate reporting into Sonar with the Maven build:

  • Add configuration to the project’s pom.xml: Surefire, Failsafe, JaCoCo, Detekt, Dependency-Check
  • Run SonarQube in Docker
  • Run the Maven build with the sonar:sonar option
  • Check the Sonar dashboard

(Figure: SonarQube project overview)

Configure a Kotlin project

Configure your Kotlin project, built with Maven, to have test reporting and static analysis. We are using Surefire to run unit tests, Failsafe for integration tests and JaCoCo to generate reports for e.g. SonarQube. See the full pom.xml in the example project (coming soon).

Test results reporting

pom.xml

<properties>
    <sonar.coverage.jacoco.xmlReportPaths>${project.build.directory}/site/jacoco/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <executions>
                <execution>
                    <id>default-prepare-agent</id>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>pre-integration-test</id>
                    <goals>
                        <goal>prepare-agent-integration</goal>
                    </goals>
                </execution>
                <execution>
                    <id>jacoco-site</id>
                    <phase>verify</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <skipTests>${unit-tests.skip}</skipTests>
                <excludes>
                    <exclude>**/*IT.java</exclude>
                    <exclude>**/*IT.kt</exclude>
                    <exclude>**/*IT.class</exclude>
                </excludes>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-failsafe-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>integration-test</goal>
                        <goal>verify</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <skipTests>${integration-tests.skip}</skipTests>
                <includes>
                    <include>**/*IT.class</include>
                </includes>
                <runOrder>alphabetical</runOrder>
            </configuration>
        </plugin>
    </plugins>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.22.1</version>
            </plugin>
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>0.8.3</version>
            </plugin>
        </plugins>
    </pluginManagement>
...
</build>
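With this configuration a single command runs unit tests via Surefire and integration tests via Failsafe, and writes the JaCoCo XML report that Sonar reads (a sketch; paths as configured in the properties above):

$ mvn clean verify                  # *Test classes via Surefire, *IT classes via Failsafe
$ ls target/site/jacoco/jacoco.xml  # the coverage report picked up by sonar:sonar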

Static code analysis with Detekt

Detekt static code analysis is configured as an AntRun, as there’s only an unofficial Maven plugin for Detekt. It’s good to notice that Detekt has some “false positive” findings, and you can either customize the Detekt rules or suppress findings if they are intentional, e.g. with @Suppress("MagicNumber").
(Figure: Detekt code smells)
pom.xml

<properties>
    <sonar.kotlin.detekt.reportPaths>${project.build.directory}/detekt.xml</sonar.kotlin.detekt.reportPaths>
</properties>
<build>
...
<plugins>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.8</version>
    <executions>
        <execution>
            <!-- This can be run separately with mvn antrun:run@detekt -->
            <id>detekt</id>
            <phase>verify</phase>
            <configuration>
                <target name="detekt">
                    <java taskname="detekt" dir="${basedir}"
                          fork="true"
                          failonerror="false"
                          classname="io.gitlab.arturbosch.detekt.cli.Main"
                          classpathref="maven.plugin.classpath">
                        <arg value="--input"/>
                        <arg value="${basedir}/src"/>
                        <arg value="--filters"/>
                        <arg value=".*/target/.*,.*/resources/.*"/>
                        <arg value="--report"/>
                        <arg value="xml:${project.build.directory}/detekt.xml"/>
                    </java>
                </target>
            </configuration>
            <goals>
                <goal>run</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>io.gitlab.arturbosch.detekt</groupId>
            <artifactId>detekt-cli</artifactId>
            <version>1.0.0-RC14</version>
        </dependency>
    </dependencies>
</plugin>
</plugins>
...
</build>

Dependency checks

Dependency checking is done with the OWASP Dependency-Check Maven plugin.
(Figure: OWASP Dependency-Check)
pom.xml

<properties>
    <dependency.check.report.dir>${project.build.directory}/dependency-check</dependency.check.report.dir>
    <sonar.host.url>http://localhost:9000/</sonar.host.url>
    <sonar.dependencyCheck.reportPath>${dependency.check.report.dir}/dependency-check-report.xml</sonar.dependencyCheck.reportPath>
    <sonar.dependencyCheck.htmlReportPath>${dependency.check.report.dir}/dependency-check-report.html</sonar.dependencyCheck.htmlReportPath>
</properties>
<build>
...
<plugins>
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <version>4.0.2</version>
    <configuration>
        <format>ALL</format>
        <skipProvidedScope>true</skipProvidedScope>
        <skipRuntimeScope>true</skipRuntimeScope>
        <outputDirectory>${dependency.check.report.dir}</outputDirectory>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
</plugins>
...
</build>

Running the Sonar scanner with Maven

<build>
...
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.sonarsource.scanner.maven</groupId>
                <artifactId>sonar-maven-plugin</artifactId>
                <version>3.6.0.1398</version>
            </plugin>
        </plugins>
    </pluginManagement>
...
</build>

Running Sonar with a Kotlin plugin

Create a SonarQube server with Docker

$ docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube

There’s also an OWASP Docker image for SonarQube which adds several community plugins to enable SAST, but for our purposes the “plain” SonarQube works nicely.
Use the Kotlin plugin that comes with SonarQube (SonarKotlin), or install the sonar-kotlin plugin, which shows information differently. If you want to use sonar-kotlin with the official SonarQube Docker image, you first have to remove the bundled SonarKotlin plugin.
Using sonar-kotlin

$ git clone https://github.com/arturbosch/sonar-kotlin
$ cd sonar-kotlin
$ mvn package
$ docker exec -it sonarqube sh -c "ls /opt/sonarqube/extensions/plugins"
$ docker exec -it sonarqube sh -c "rm /opt/sonarqube/extensions/plugins/sonar-kotlin-plugin-1.5.0.315.jar"
$ docker cp target/sonar-kotlin-0.5.2.jar sonarqube:/opt/sonarqube/extensions/plugins
$ docker stop sonarqube
$ docker start sonarqube

Adding dependency-check-sonar-plugin to SonarQube

$ curl -JLO https://github.com/SonarSecurityCommunity/dependency-check-sonar-plugin/releases/download/1.2.1/sonar-dependency-check-plugin-1.2.1.jar
$ docker cp sonar-dependency-check-plugin-1.2.1.jar sonarqube:/opt/sonarqube/extensions/plugins
$ docker stop sonarqube
$ docker start sonarqube

Run tests on the project and scan with Sonar

The verify phase runs your tests and should generate, among other things, jacoco.xml under target/site/jacoco, as well as detekt.xml.

$ mvn clean verify sonar:sonar

Access Sonar at http://localhost:9000/.

Code quality metrics? So what?

You now have metrics in Sonar to show to stakeholders, but what should you do with those numbers?
One use case is to set up Quality Gates in SonarQube: a set of conditions that must be met before the project can be released into production. Ensuring the quality of “new” code while fixing existing issues is a good way to maintain a healthy codebase over time. A Quality Gate lets you define rules for validating every new piece of code added to the codebase on subsequent analyses. By default the conditions are: coverage on new code is below 80%, the percentage of duplicated lines on new code is above 3, or the maintainability, reliability or security rating is worse than A. The default rules provide a good starting point for your project’s quality metrics.

Marko Wallin


Using version control is an essential part of modern software development, and using it efficiently should be part of every developer’s toolkit. Knowing the basic rules makes it even more useful. Here are some best practices to help you on your way.
tl;dr:

  1. Commit logical changesets (atomic commits)
  2. Commit Early, Commit Often
  3. Write Reasonable Commit Messages
  4. Don’t Commit Generated Sources
  5. Don’t Commit Half-Done Work
  6. Test Before You Commit
  7. Use Branches
  8. Agree on a Workflow

Commit logical changesets (atomic commits)

A commit should be a wrapper for related changes. Make sure your change reflects a single purpose: the fixing of a specific bug, the addition of a new feature, or some particular task. Small commits make it easier for other developers to understand the changes and to roll them back if something goes wrong.
Each commit creates a new revision identifier which can forever be used as a “name” for the change. You can mention this revision in bug databases, or use it when you want to revert the change or port it to another branch. Git makes it easy to create very granular commits.
So if you make many changes to multiple logical components at the same time, commit them in separate parts; that way it’s easier to follow the changes and their history. Working on features A, B and C while fixing bugs 1, 2 and 3 should result in at least six commits.
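Git’s staging area makes this easy even when your working tree already contains several unrelated changes (a sketch; the ticket identifiers are hypothetical):

$ git add -p                                   # interactively stage only the hunks of one logical change
$ git commit -m "ISSUE-1 Fix off-by-one error in paging"
$ git add -p
$ git commit -m "ISSUE-2 Add sorting to the user list"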

Commit Early, Commit Often

It is recommended to commit code to version control often, which keeps your commits small and, again, helps you commit only related changes. It also allows you to share your code more frequently with others.
That makes it easier for everyone to integrate changes regularly and to avoid merge conflicts. Having a few large commits and sharing them rarely, in contrast, makes conflicts hard to solve.

“If the code isn’t checked into source control, it doesn’t exist.”
Coding Horror

Write Reasonable Commit Messages

Always write a reasonable commit message. It should be short and descriptive and tell what was changed and why.
Begin your message with a short summary of your changes (up to 50 characters as a guideline). Separate it from the following body with a blank line.
It is also useful to add a prefix to your message, like Fix or Add, depending on the kind of change. Use the imperative, present tense (“change”, not “changed” or “changes”) to be consistent with generated messages from commands like git merge.
If you are fixing a bug or implementing a feature that has a JIRA ticket, add the ticket identifier as a prefix.
For example: “ISSUE-123 Fix bugs in the dropdown component for selecting items” or “ISSUE-1234 Fix bad allocations in image processing routines”.
Not like this: “Fixed some bugs.”
The body of your message should provide detailed answers to two questions: what was the motivation for the change, and how does it differ from the previous implementation?

“If the changes you made are not important enough to comment on, they probably are not worth committing either.”
loop label

Don’t Commit Generated Sources

Don’t commit files which are generated dynamically or which are user-specific, like the target folder, IDEA’s .iml files, or Eclipse’s .settings and .project files. They depend on the individual user’s setup and don’t relate to the project’s code.
The project’s binary files and Javadocs don’t belong in version control either.
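Instead, list generated and user-specific files in .gitignore so they never show up as untracked (a sketch for a Maven and IDE setup like the one described above):

$ cat .gitignore
target/
*.iml
.idea/
.settings/
.project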

Don’t Commit Half-Done Work

You should only commit code when it’s completed. Split a feature’s implementation into logical chunks and remember to commit early and often. Use branches, or consider using Git’s stash feature if you need a clean working copy (to check out a branch, pull in changes, etc.).
On the other hand, you should never leave the office without committing your changes to a branch on the remote repository.

“It’s better to have a broken build in your local working repository on a branch than a working build on your broken hard drive.”
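Git’s stash feature mentioned above covers the clean-working-copy case (a sketch):

$ git stash          # shelve half-done work, leaving a clean working tree
$ git checkout main  # switch branches, pull in changes, etc.
$ git checkout -     # jump back to the previous branch
$ git stash pop      # restore the shelved changes and continue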

Test Before You Commit

You should only commit code which is tested and passes tests, and this includes code formatting with linters. Write and run tests to make sure the feature or bug fix really is completed and has no side effects (as far as one can tell).
Having your code tested is even more important when it comes to pushing and sharing your code with others.

Use Branches

Branching is one of Git’s most powerful features – and this is not by accident: quick and easy branching was a central requirement from day one. Branches are the perfect tool to help you avoid mixing up different lines of development.
You should use branches extensively in your development workflows: for new features, bug fixes and ideas.
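Starting a topic branch and sharing it costs almost nothing, which is why it’s worth making a habit (a sketch; the branch name is hypothetical):

$ git checkout -b feature/ISSUE-123-dropdown-fix    # start a topic branch
$ git push -u origin feature/ISSUE-123-dropdown-fix # share it early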

Agree on a Workflow

Git lets you pick from a lot of different workflows: long-running branches, topic branches, merge or rebase, git-flow.
Which one you choose depends on a couple of factors: your project, your overall development and deployment workflows and (maybe most importantly) on your and your teammates’ personal preferences. However you choose to work, just make sure to agree on a common workflow that everyone follows.
Atlassian has a good article comparing workflows to suit your needs; it covers the centralized, feature branch, Gitflow and forking workflows.
(Figure: simplified Git flow. Source: https://buildazure.com/2018/02/21/introduction-to-git-version-control-workflow/)

Summary

Using version control is, fortunately, an acknowledged best practice and a standard part of software development. Using even a couple of the practices above makes working with the code much more pleasant. Adopting at least “Commit logical changesets” and “Write Reasonable Commit Messages” helps a lot.

Marko Wallin


New KPIs for winning cultures


According to Forbes, 52% of the world’s 500 largest companies have disappeared during the last 15 years, and by 2027 the life expectancy of a company is forecast to be only 12 years. Getting stuck in traditional ways of leading is disastrous.

A model for leadership can be sought from true winning teams, such as the US military’s SEAL special forces, Pixar’s innovation teams or the best sports teams. According to Coyle (2018), they do three fundamental things: they create a safe platform for individuals to grow, they invest significantly in building a sense of belonging, and they enable an atmosphere of trust in which each individual can be, and succeed as, their own vulnerable self.
Quite far from traditional business metrics, isn’t it?

Individual growth

With artificial intelligence and robotisation, the importance of human capital – such as innovation, emotions and social skills – is growing. Even excellent execution of routines is no longer a competitive advantage. Winners invest in making sure that every individual can renew themselves and drive renewal. A culture of learning is built into the organisation, one where everyone can learn safely, without fear of being hurt.
Power and responsibility are moved to teams and individuals, downwards in the organisation. Teams can decide their own matters and have permission to fail, but they also have the responsibility to learn from every failure. Leadership becomes a service – in the future, teams may even choose their leader for each project based on strengths, special skills or competences.
Leaders are needed to support people’s growth and to show direction, not to give orders.

Belonging

A company must have a clear purpose and direction. Without them there is no point in making a fuss about self-organisation – it becomes possible only when everyone can answer the questions: Why do we exist? How did we become us? Why do we need to change? How do we move forward? What is the individual’s role as part of the whole?
Agile organisations, such as Google, successfully practise social decision making with the help of a new kind of individual metric (OKR, Objectives & Key Results) instead of traditional KPI performance metrics. Genuine renewal emerges as the result of the whole organisation’s shared work and its parts, not as something dictated from the corner office.
For a leader this means a big change. Listing facts is not enough. You must be able both to explain why things matter and to foster the organisation’s sense of belonging and self-organisation by being present yourself – showing direction, inspiring, prioritising and making decisions when they are needed.

Trust

In new-era organisations, people are trusted. Individuals are seen as driving goals shared by everyone. They represent capabilities and assets that support sustainable competitiveness; they are not mere resources, cost items or risks to be watched over.
Good is seen to bring good, and successes to be the breeding ground for new successes.
At the centre are well-functioning, open human relationships. Building deep trust between people is invested in, because human capital is the most important capital of all.
When you now think about your own organisation’s culture, ways of working, values, mission and vision, how well are the three core practices of winning cultures described above realised? Under time and cost pressure, do you end up saving on exactly the things that could lead to success? Are you worthy of trust? Do your words carry meaning?

Do you believe in change? That you can change the world for the better, for people and the environment?
Take a look at our new publication and our experts’ views: Recoding change

Jere Talonen

Jere works at Gofore as a leadership and service culture development consultant. He has over 20 years of business management team experience with global consumer brands, gathered in nine countries on three continents. He is also a startup entrepreneur experienced in building ecosystems and networks.


Riikka Jakovuori

Riikka is the lead consultant in charge of Gofore’s culture consulting. Before Gofore, Riikka led Accenture’s marketing and communications and was a member of its Finnish leadership team, and she also worked in business development leadership roles at F-Secure, among others. Riikka has also been a key person and shareholder in Apped, a startup focused on mobile applications, and she is a certified coach.


Making use of artificial intelligence is not as difficult as people imagine. You can get started even with fairly raw data, and the required effort is reasonable. What it does demand is an attitude different from what we are used to.

AI is a learning organism: a technology that combines information and adapts its behaviour according to what it learns. You cannot pull such a thing off the shelf. An AI that has learned from the data of another organisation – even one wrestling with the same challenges – will probably not fit your organisation’s system environment as-is.
The learning nature of AI is new to us. We are used to defining goals in advance – to playing it safe. When you make use of AI, you soon notice that instead of predefined outputs, the most valuable findings emerge as if they were by-products.
A learning technology tells its developers things we did not know.

The three basic principles of AI

In my experience, the successful application of AI boils down to the following principles.

  1. The outcome cannot be defined in advance. AI projects use data gathered from different sources to solve problems that have never been solved before – often in unprecedented ways. That is why, at the start of a project, no one can be sure what the outcome will look like, when it will be ready (or whether it will ever be ready) and what it will cost. The old procurement habits based on predefined plans, business cases and fixed-price offers belong in the scrap heap. I have personally helped solve problems with AI that were not even known to exist when the project started.
  2. Rather than technology, it is about the willingness and capability to create something new. An AI project can start from making an existing process more efficient, but there is considerably greater potential in entirely new innovations. LED lights weren’t born by tuning up the candle!
  3. AI is a shared concern. AI’s most essential potential lies in combining data from different parts of the organisation that could not be exploited before. That is why it should by no means be used only for traditional, isolated, business-unit-specific development. The most valuable findings are often ones nobody could have anticipated.

Get going with experiments – AI will teach you

It is better to start with small experiments than to bet on a single mega-scale initiative. This way the first results arrive in as little as a couple of weeks, depending on the time needed for data preparation.
The work is evaluated continuously. The findings of a day or a week affect the next pieces of work; they can change the direction of the whole project. Progress relies on the power of understanding that accumulates through doing. First you do, then you evaluate: what succeeded or failed and why, and what should be done next?
Learning is experimenting. Without experiments it is completely impossible to assess the potential benefit AI could produce.
In my view, the fear of adopting AI is unwarranted. Data protection and legal concerns, or the fear that your own data is not good enough, may prevent you from starting. Making use of AI is not nearly as expensive and difficult as people imagine; even fairly raw data is enough to get started. If you have already digitalised your basic processes, there is nothing stopping you from running AI experiments.
Start today – you gain nothing by waiting!
 
Do you believe in change? That you can change the world for the better, for people and the environment?
Take a look at our new publication and our experts’ views: Recoding change

Pasi Lehtimäki

Pasi is a management consultant whose heart beats for analytics. He helps customers make use of data and analytics in an increasingly digital operating environment, and develops Gofore’s analytics offering and its innovative community of analytics experts.
