Introduction 

The X-Road Security Server Sidecar is a Docker container optimized to run as a provider or consumer X-Road Security Server deployed next to an Information System. The Security Server Sidecar is intended to run in the same context (virtual host, Kubernetes cluster, etc.) as the Information System. The containerized approach makes running the Security Server more cost-effective and better suited for environments where the Information Systems exchanging data over X-Road already run in containers. The Security Server Sidecar was originally developed for the Finnish Digital Agency and is published as a free, MIT-licensed open source component of the X-Road data exchange platform managed by the Nordic Institute for Interoperability Solutions (NIIS).

Installation 

With the Sidecar, the Security Server installation process becomes much simpler than setting it up on dedicated server hardware, requiring only an existing Docker installation on a Linux platform. Windows and macOS are not officially supported, but they may be used for test and/or development purposes.

The X-Road ecosystem member can run one of the several Security Server Sidecar images published in the NIIS Docker Hub repository. Some user-defined parameters are required, such as the X-Road database and Admin UI credentials and the software token PIN, to ensure that the configuration of the Security Server Sidecar running in the container is unique. During the first run of the Security Server Sidecar container, the entrypoint script generates unique internal and admin UI TLS keys and certificates and configures the admin credentials and software token PIN code supplied by the user.
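As a rough illustration, a first run could look like the following sketch. The image tag and the XROAD_* parameter names follow the NIIS Sidecar documentation at the time of writing, so verify them against the documentation of the image version you use; all values are placeholders.

    # First run of a Security Server Sidecar container (illustrative values only).
    # Port 4000 serves the admin UI; the entrypoint script picks up the
    # credentials and the PIN from the environment variables on first start.
    docker run --detach \
      --name my-security-server \
      -p 4000:4000 \
      -p 8080:8080 \
      -e XROAD_TOKEN_PIN="1234" \
      -e XROAD_ADMIN_USER="xrd-admin" \
      -e XROAD_ADMIN_PASSWORD="change-me" \
      niis/xroad-security-server-sidecar:6.25.0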

The Security Server Sidecar provides the option to configure an external database instead of the default local one by providing the remote database address, port and superuser credentials as parameters. It also supports a variety of cloud databases, including AWS RDS and Azure Database for PostgreSQL. This deployment option is useful when a cloud-native database is the preferred choice.
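As a sketch of this option, the remote database parameters can be passed the same way. The XROAD_DB_* variable names below again follow the NIIS documentation at the time of writing, and the host and credentials are placeholders:

    # Run the Sidecar against an external PostgreSQL database,
    # e.g. AWS RDS or Azure Database for PostgreSQL.
    docker run --detach \
      --name my-security-server \
      -p 4000:4000 \
      -p 8080:8080 \
      -e XROAD_TOKEN_PIN="1234" \
      -e XROAD_ADMIN_USER="xrd-admin" \
      -e XROAD_ADMIN_PASSWORD="change-me" \
      -e XROAD_DB_HOST="my-database.example.com" \
      -e XROAD_DB_PORT="5432" \
      -e XROAD_DB_PWD="db-superuser-password" \
      niis/xroad-security-server-sidecar:6.25.0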

Sidecar images 

The X-Road ecosystem member can make use of different Security Server Sidecar Docker image versions. Each image version installs a custom set of pre-built X-Road Security Server modules. Depending on the image variant, either only the basic X-Road Security Server modules or additional ones are installed. For example, the 6.25.0 version of the Security Server Sidecar includes the message log, operational monitoring, and environmental monitoring modules, whereas the 6.25.0-slim version does not. In both cases, the Security Server Sidecar can be used for both consuming and producing services, although the slim version is recommended for the service consumer role. Additionally, there are some country-specific configuration versions available, such as the 6.25.0-fi version, which includes the Finnish meta-package configuration (currently the only one).

Deployment options 

One of the advantages of using the Security Server Sidecar is that it runs alongside the client’s or service’s Information System on the same host but in a separate container. A Security Server Sidecar container can serve one or more Information Systems in the same cluster. Later, the Security Server Sidecar can be scaled up and down independently of the Information System in the cluster to accommodate fluctuations in the volume of requests. However, in this deployment scenario, the footprint of the Sidecar container is relatively high compared to that of an average container, and this must be taken into consideration when dimensioning the cluster size.

When the Security Server Sidecar is run in a production system, a single point of failure is not acceptable. Fortunately, the Security Server natively supports a high-availability configuration via an internal load-balancing mechanism. For this purpose, the user needs to configure several Security Server Sidecar containers with the same combination of member / member class / member code / subsystem / service code. The X-Road Central Services will then route each request to the Security Server Sidecar container that responds the fastest.

 

External database and volumes 

Another benefit of using the Security Server Sidecar is that it can use either a local database running inside the container or a remote database running externally. Since the Security Server is a stateful application, in a production environment it is strongly recommended to configure the Sidecar container to use volumes and an external database, so that information persists outside the container and the configuration is not lost when the container is destroyed. Docker volumes allow the same configuration to be used by several Security Server Sidecar containers, making it possible to keep the configuration even if a Security Server Sidecar container is removed or updated. The Security Server Sidecar can easily be updated by creating a backup, running the image with the new version, and restoring the backup or reusing the volume with the previous configuration. A major version update may require changes, so before updating the Security Server Sidecar, it is always advisable to check the release notes of the specific version.
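For example, the configuration and message log can be kept in named volumes. This is a minimal sketch assuming the standard Security Server paths /etc/xroad and /var/lib/xroad; check the Sidecar documentation for the exact directories your image version persists:

    # Named volumes survive container removal and can be re-attached
    # when a container with a newer image version is started.
    docker volume create xroad-config
    docker volume create xroad-data

    docker run --detach \
      --name my-security-server \
      -v xroad-config:/etc/xroad \
      -v xroad-data:/var/lib/xroad \
      -p 4000:4000 \
      -p 8080:8080 \
      -e XROAD_TOKEN_PIN="1234" \
      -e XROAD_ADMIN_USER="xrd-admin" \
      -e XROAD_ADMIN_PASSWORD="change-me" \
      niis/xroad-security-server-sidecar:6.25.0

An update then amounts to stopping and removing the old container and starting the new image version with the same volumes, after checking the release notes.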

 

Security 

From a security point of view, Docker isolates applications running in containers from each other. However, running the Security Server Sidecar in a Docker container carries some security risks derived from the separation of the application layer and the infrastructure layer. The user should carefully review Docker security best practices for securing the Security Server Sidecar container. A comprehensive security guide can be found in the Security Server Sidecar documentation, describing the most relevant recommendations for avoiding common security pitfalls.

To prevent unrelated services or containers running on the same host from reaching the Security Server Sidecar, a user-defined bridge network should be employed, allowing only containers attached to that network to communicate with each other. It is also strongly recommended to store configuration files containing sensitive information in volumes outside the Security Server Sidecar container.
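A minimal sketch of such a setup, with arbitrary container and network names, and with the required XROAD_* environment variables omitted for brevity:

    # Only containers attached to the user-defined bridge network can
    # reach each other through it; other containers on the host cannot.
    docker network create x-road-network

    docker run --detach --name my-security-server \
      --network x-road-network \
      niis/xroad-security-server-sidecar:6.25.0

    docker run --detach --name my-information-system \
      --network x-road-network \
      my-information-system-image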

 

More information about the Sidecar:

Finnish Digital and Population Data Services Agency: A new highly requested sidecar option

Raul Martinez

Raul is a Software Engineer with more than 10 years of international experience in software consultancy firms around Europe, covering software architecture, systems integration, and project management. Besides managing the operations of Gofore Spain, Raul works on digital transformation projects with the Finnish Digital Agency and the Nordic Institute for Interoperability Solutions (NIIS) on the X-Road secure data exchange platform. He is an enthusiast of applying new technologies to make the world a better place bit by bit, and a tireless traveler who loves to meet people from different cultures and enjoys practicing his skills in food and music in his spare time.


Deploying public cloud platforms is effortless and fast. Even beginners can achieve visible results quickly — a virtual machine only takes a few dozen seconds to set up. Just give your credit card details and get started on your project, and what you don’t know yet, you can easily learn as you go along, right?

However, ease of use conceals risks. Platform providers may invest heavily in issues such as security, but novices can easily wind up building an insecure environment. Expertise is also needed when choosing a solution for each purpose: reserved but unused capacity can lead to unnecessary costs.

Granting full freedom of action to projects can backfire later, by making them difficult to manage or by raising costs. Environments paid for with plastic and built hastily around project needs may include needless overlapping solutions.

Build safely on a solid foundation

As in construction projects, a safe and stable foundation guarantees a firm basis on which you can build, and provides opportunities for extensions. Repairing a foundation retrospectively can be laborious and incur unnecessary costs.

However, nothing is set in concrete at the beginning of a cloud project — the configuration can be updated as use expands. On the other hand, you should design the basic components related to e.g. account structures, network connections and authentication at the very start, in order to move forward on a firm basis. Role-based access and user management provide clarity and improve security. Well-designed network structures within, as well as outside, the cloud enhance security and boost intuitiveness. Management of confidentiality should also be clearly planned and communicated.

Where necessary, certain cloud services can be excluded from normal use — few users need the computing power of supercomputers, for example. However, such services will certainly accrue costs. In many cases, there are also good grounds for restricting the geographical location. It’s often best to start by opting for the users’ local region, particularly if it offers a sufficiently broad service portfolio. You can select a certain single region or, for example, the EU/EEA.

As use expands, matters tend to arise such as monitoring solutions of various kinds, log management, and increasing and ensuring fault tolerance.

An expert partner will help you make the right choices and define the basic principles. There is certainly no need to spend weeks poring over plans, and you can also make sure that the cloud foundation complies with the best practices recommended by platform providers.

Cloud Foundation or Landing Zone?

It’s easy to get lost in the terminology jungle. Different platforms may use slightly different terms for a cloud foundation, but they nevertheless mean the same thing. In the most straightforward cases, the provider offers a ready-made framework on which a cloud foundation can be built from code.

This means that design of the foundation does not in any way hinder the project from starting, but ensures smooth work and efficient resource use in the future.

For larger projects, it also makes sense to consider setting up your own cloud-focused competence centre to provide projects with support and expertise to ensure efficient use of the cloud.

Controlled expansion, efficient operation

A sensible and controlled basis also provides opportunities to expand into new areas; completely new solutions can be built on the cloud foundation, or existing ones transferred there. Centralised cost management enables cost optimisation and ensures overall visibility.

Centralised DevOps practices to streamline product development and a highly automated cloud architecture minimise the need for manual work, ensuring efficient and modern cloud-based operations. Automatic recovery from faults is no longer the stuff of science fiction.

Towards more sustainable, genuine benefits

Thanks to public cloud platforms, ICT architectures can be built in hours or days, rather than weeks or months. However, the pace must allow for taking time to get some of the basics right and thereby guarantee efficient and secure use of the cloud in the future. Even in the cloud, security, scalability and ease of operation are not intrinsic values, but a well-designed foundation can help to ensure that you genuinely gain from a cloud-based environment.

 


Do you want to get the most out of cloud services?

Take a step towards your goal by signing up for our free GTalks webinar “Good basics of Cloud” on 11.3.2021 (11:00 to 12:30 EET).

INFORMATION AND SIGN UP HERE

Jussi Puustinen

Jussi Puustinen runs the Cloud & DevOps unit at Gofore and is an IT professional who loves the outdoors. Creating continuous customer value is close to Jussi’s heart. He has more than 10 years of solid experience across the entire life cycle of IT services – from strategic planning to implementation. He thrives on helping customers take advantage of new technologies effectively.


There is a lot of debate these days about cloud service management, management models and Governance in general. However, when you talk to customers about them, it seems that many people have quite different ideas about the terms and concepts.

For some, a management model is a technical document that describes how a cloud service has been built in, say, Azure, and how it should be operated, what features the cloud service provides, and how the environment is maintained.

Others think it’s a description of all the activities run around cloud services: what the responsibilities are, how your Cloud Center of Excellence is staffed, and what support functions are required for a holistic, cloud-based approach.

We at Gofore help our customers at every stage of their cloud journey, so I’ll share some of my experiences of the topic.

Today’s reality

Many Finnish companies and organisations have adopted a variety of cloud services. Microsoft 365 may be used by the entire organisation, while marketing may use HubSpot, the Data and AI team is crunching away data in Azure, and some developer may have decided to add an important test environment to Google Cloud. They’re all saying that their environments are under control and everything’s working just fine. But is it really? This is a common situation these days, and fewer and fewer organisations have common rules for operating the above services.

So what do you need such rules for?

Cloud services attract plenty of interest in organisations, everyone seems to have an opinion, and sometimes people can’t see the wood for the trees. Opinions are presented as facts when talking about data security or data protection, services are ordered without proper processes, and invoicing is handled casually with credit cards. That doesn’t seem like a particularly solid foundation.

Although we at Gofore love technology, especially the opportunities it presents, we want to help our customers create the best possible basis for sustainable utilisation and scaling. A cloud governance model should also take into account all the boring non-technical aspects required to make day-to-day operations and business run smoothly.

Basis of a good cloud governance model

A good cloud governance model should define which cloud services are used, how they are used and maintained, and how new products and services are developed. Are all of the company’s services managed by the company, or by a supplier or maybe a Cloud Center of Excellence that can flexibly use the necessary resources wherever they are available? What help is available to the business when considering a new service or requiring technical expertise for a project? How to ensure the data security and data protection of services, what are the ground rules? How to launch cloud services so that day-to-day operations will continue smoothly even after the project?

We all too often see cases in which only the technical issues are addressed and everything’s fun until the developers are transferred to the next interesting project. The organisation’s IT department may be faced with an impossible task if the application requires further development or more demanding changes.

Our firm opinion is that a cloud governance model must include all responsibilities, practices and processes related to cloud service development and maintenance. Support is needed from idea to production, to ensure that everything works well.

From theory to practice

It’s never too late to start, because a good cloud governance model will be crucial for many organisations in the future.

Aspects we underline when helping our customers to build a good operating model:

  • Clear responsibilities
    • What will the customer be responsible for, and what will the supplier take care of and be allowed to decide – a suitable combination of power and responsibility
  • Organisation
    • How to organise things around cloud services? How do projects gain assistance with technical or architectural questions? How will projects be developed to be ready for launch?
  • Common set of rules for data security and data protection
    • Cloud services have differences in terms of, for example, logging and monitoring; how do you ensure that all services are covered by the same rules and are sensibly managed?
  • Cloud service maintenance
    • Architecture and technology are important, but so are the processes and practices surrounding them. Where are instructions located, which tools are used, which support model is applied to applications and how do you cooperate with other partners?

 


Do you want to get the most out of cloud services?

Take a step towards your goal by signing up for our free GTalks webinar “Good basics of Cloud” on 11.3.2021 (11:00 to 12:30 EET).

The webinar speakers will be Gofore’s leading cloud consultants Jussi Puustinen and Joonas Vuorela, author of this post. GTalks will be hosted by Tiia Hietala who is responsible for Gofore’s cloud partnerships and trainings.

INFORMATION AND SIGN UP HERE

Joonas Vuorela

Joonas Vuorela works at Gofore as a leading ICT consultant and is motivated by helping customers with ICT infrastructures, cloud services, and better cloud management. According to Joonas, nothing is (professionally) better than guiding customers towards a more sustainable and inspiring path with cloud technologies.


Fifty shades of cloud

Cloudy skies in black and white

Everyone has heard of cloud services, some even know what they are, and almost everyone has a fairly strong opinion about them. Those opinions are sharply divided.

Some experts are very sceptical about the cloud. It is considered vague and, above all, unreliable in many respects. Cloud services are ‘out there somewhere’, are operated and processed by ‘whoever’, and are vulnerable to network connections being down. Many organisations question whether key services can be moved to the cloud, while others still ban the use of cloud services altogether. This approach can be regarded as an unwritten “no cloud” strategy.

The other extreme is made up of cloud groupies. The cloud is viewed as an attractive, all-purpose technology to which all services should be moved immediately, and then only used from there. This ‘cloud technology groupie’ line is equivalent to an undocumented and unaccepted ‘cloud only’ strategy.

Because these two very opposing views often clash in the same organisation, the organisation should outline its approach to cloud services — a documented, carefully considered and widely accepted cloud vision and cloud strategy are needed.

A cloud or just hot air?

When discussing cloud services, the first problem tends to be that different people have different views of what cloud services are. For some, ‘cloud’ means ‘anything to do with IT from outside our data centre.’ Many are beginning to take a strong stand on whether AWS virtual platforms or development tools are better than MS Azure. Some recall that systems are perhaps being acquired on a SaaS basis.

When developing a cloud service policy, it is a good idea to take a stand on what counts as a genuine cloud service — well-established and highly productised cloud services with large customer bases, which are highly adaptable to customer needs without prior commissioning. Not every virtual platform run by a supplier is a cloud service.

Discussions about cloud services are often limited to cloud platforms. However, the Finnish public administration’s cloud guidelines include the sound principle of having a cloud policy that covers all cloud operating models — IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) and BPaaS (Business Process as a Service). Good cloud policies cover all of these.

From black and white to shades of grey — the smart approach

Cloud services are here to stay. Ruling them out recalls the negative attitude of gaslight experts to electricity — it won’t work and it’s probably dangerous. For example, off-the-shelf software has moved, or is actively moving, almost exclusively to the cloud. In practice, new software of this type is no longer being developed for setup by the customer. Over the next few years, it may even become difficult to find locally installed software that meets operational needs. So the issue is no longer whether or not to use cloud services, but how to use them securely and benefit from them.

It is time to let go of the ‘all or nothing’ attitude to cloud services. Cloud service users now adopt the so-called Cloud Smart approach, assessing the suitability of cloud services on a case-by-case basis. Most services can be moved to the cloud, but some cannot due to regulations or the need for continuity.

A cloud strategy and detailed development path in support of change

Define your organisation’s very own strategy for benefiting from cloud services. Draw up an overview of all the ways in which your organisation can leverage cloud services. On what grounds and based on what policies will cloud services be bought or developed, and with what aims? Explore all cloud service models (IaaS, PaaS, SaaS and BPaaS) and the entire cloud solution life cycle.

A cloud vision and strategy will provide an excellent, jointly agreed main model for leveraging cloud services in your organisation. However, you will need more than a strategy to realise the benefits of cloud services. Draw up a systematic, comprehensive and measurable roadmap for developing cloud service capabilities (expertise, technology, management models, instructions and procurement procedures).

Be bold, document the process in detail, accept and commit.

 


Do you want to get the most out of cloud services?

Take a step towards your goal by signing up for our free GTalks webinar “Good basics of Cloud” on 11.3.2021 (11:00 to 12:30 EET).

INFORMATION AND SIGN UP HERE

Mika Karjalainen

Mika Karjalainen is a developer and ultimate consultant for Gofore's Capability and Ecosystem Framework. He has been involved in building hundreds of success stories with our customers. Mika believes that there is always a good time for renewal.


Native DevOps

Since DevOps as a practice has gained a huge amount of attention and popularity, new variants keep popping up as in a game of whack-a-mole. Amidst this whirlwind, it is important to remind yourself: what is true DevOps?

DevOps lifecycle

The Problem

In a traditional setting, a gap exists between producing value (development) and delivering that value to the customers (operations). This gap slows down the production of any customer value, and in many cases, what is being produced does not have anything to do with the real value that is actually expected.

The second root cause for issues is speed and feedback. Traditionally, programming is a slow task to complete and so is the integration and deployment of changes. Also, the chasm between developers and end customers can be too wide to provide any sort of tangible feedback for the produced solutions, at any scale.

The third major problem is that Agile has become the driving force in software development. This new Lean-based methodology promises distributed decision-making and faster time-to-market. Unlike traditional frameworks like ITIL, success is not built on a command-and-control structure. This means that development teams often try to optimize their work and outcomes without understanding the bigger picture. This has resulted in major issues in the operations of these solutions. The promise of faster time-to-market is often misinterpreted as releasing new features as fast as possible while ignoring everything else.

The Solution

Along came the idea of DevOps. The gap between Development and Operations is eliminated by combining the two into a single team. This team now has the responsibility and the freedom to develop and operate their software product without external dependencies. The foundation of DevOps was built on top of Lean principles and automation. The core pillars of DevOps are Flow, Feedback and Continuous Learning.

The Flow is based on Lean’s continuous flow of small batches. Continuous flow is often described as the shortest sustainable lead time. Small batches refer to the fact that the smaller the item size, the easier it is to control the flow. The Flow in DevOps is achieved through Continuous Integration & Continuous Deployment (CI/CD) and Infrastructure as Code (IaC).

Feedback is the guiding light for DevOps. There is no DevOps cycle without fast and continuous feedback on every phase of the cycle. Continuous flow requires data-driven decision-making. Fast feedback is achieved through test automation and automated monitoring. There is no CI/CD without test automation. Automated monitoring enables, for example, proactive responses to potential incidents.

Continuous Learning is about, for example, failing often and failing fast. It is also about using the received feedback for the betterment of the team and the product. Continuous Learning is a mindset of never-ending experimentation in search of new and better practices, product features, and flow optimization through automation.

The cornerstone of DevOps is the phrase ‘Automate Everything’. Automation is the way to reach flow, feedback, and continuous learning in the development and operation of a software product. Automation is present from start to finish in the DevOps cycle. Automation enables the DevOps team to focus on value-adding tasks.

Key takeaways

  • DevOps is about Flow, Feedback and Continuous Learning
  • DevOps is a solution to a specific set of problems when creating customer value through a software product.
  • There is no DevOps without extensive automation.
Tommi Ferm

At Gofore, Tommi works as the Head of Offering, Software Testing & Software Quality Assurance. His colorful journey has taken him from Software Testing and Management Consulting to Offering Development. Tommi is passionate about Value Creation, Systems Thinking and Continuous Improvement. He is also an avid supporter of Lean-Agile practices and DevOps.

Jani Haapala

Jani works as a DevOps architect at Gofore. He is passionate about measurement, feedback, and continuous automated quality feedback loops. Jani’s journey started from manual testing and has evolved to full-scale software development automation. Jani thinks that automation can help everybody and increase value in anything.


Is your business Agile?

How can you tell if your business is Agile? How do you ask C-level people whether their organisation is Agile?

From small to big

There is plenty of advice on how to measure agility at the team level: Velocity, Burndown, Planned vs Actual, Work in Progress, DevSecOps, etc. All these measures aim to estimate and optimize the long-term work capacity and work quality of a team.

However, when assessing overall Agility at an organisational level, we need to take a step back. The main idea of a business is to deliver maximum added value to stakeholders. Therefore, we need to shift our attention from “HOW” to “WHAT”. The main idea of Agile is to accelerate the decision-making-and-learning loop. “The Decision-Driven Organization” in Harvard Business Review 06/2010 included a Quick Test of Decision Effectiveness, which nicely describes the Agile mindset:

  • Quality: When looking back on critical decisions, you find that you chose the right course of action most of the time.
  • Speed: You make critical decisions much faster than competitors.
  • Yield: You execute critical decisions as intended most of the time.
  • Effort: In making and executing critical decisions you put in exactly the right amount of effort.

Agile Leadership

Without leadership, an organisation slides down into an ad-hoc limbo. While being agile, the organisation still needs a strategy, a long-term vision. In parallel, you must avoid mid-term planning. Mid-term planning turns into middle management, where both the long-term vision and the short-term agility are lost. Mid-term planning is waste.

In Agile it is OK to fail. Failure means learning. Agile is about embracing the scientific approach: hypothesize, test, analyze and decide to persevere or pivot. It is OK to run multiple hypotheses in parallel with the ‘Least-fit’ principle instead of selecting a single idea and running with it. Dare to start parallel strategic change projects and after some time kill the failing ones. A/B testing is a fast way to learn more.

Lean Management

Bureaucracy tends to fill all the gaps. Bureaucratic managers will always figure out new ways to measure, manage and control. Bureaucracy never leads to a better business.

Bureaucracy lives from the fear of uncertainty, but it will not fix the uncertainty.

Uncertainty is a byproduct of complexity. You can reduce both bureaucracy and uncertainty by making information transparent and keeping decision-making as close to operations as possible. As the Scaled Agile Framework states, continuous learning focuses on relentless improvement, where improvement activities are fact-based and increase the effectiveness of the entire system instead of silos.

Management is about making decisions. Making hard decisions is difficult. A weak manager easily lists a dozen things to focus on next. “Let’s first focus here, and then here, and let’s keep our options open also here.” This is slack decision-making and a waste of energy. A strong manager lists only one thing. “Let’s focus here and say ‘No!’ to everything else.” In an Agile mindset, you focus a period of time on the thing that matters the most. Then you reflect and learn.

Complex Business

Reductionism means that you can solve a problem by breaking it down into smaller parts, solving the small problems, and then putting it all back together again. This works in a simple environment. However, the assumption that past experience leads to a deterministic future solution is valid only within an ordered system. With enough data, you can find a correlation between anything. “People drown when they eat more ice cream”. But this does not imply causation. Problems arise when the work is based on wrong assumptions. In a complex environment, you need trials, where you break the system into components, study their interactions and create a holistic model of the system as a whole. The situational awareness model helps you with such systemic decision making.

Small investments can be made into safe-to-fail experiments in a balanced portfolio before committing more resources. Traditional enterprise control mechanisms don’t work with an Agile mindset, and traditional annual budgeting doesn’t support the fail-fast principle. Still, you can make agile decisions along the way about what to do with the budget.

Agile Resiliency

Even an Agile organisation needs a fast re-planning mechanism, which triggers if a positive opportunity or a negative risk materializes. Re-planning means you stop the ongoing planning cycle, re-plan a new cycle and keep going. As long as the long-term strategy is viable, you only need to reset the latest short-term cycle. You are able to make fast decisions concerning the short-term direction.

Again, in the event of a sudden change, a weak manager tends to re-plan everything from the long-term strategy to the mid-term plan and finally to short-term actions. This is slow, sloppy and useless. In the event of a sudden change, you need to make fast decisions. Fast decisions are a sign of resilience. Fast decisions enable you to exploit sudden opportunities and avoid imminent threats.

Agile Predictability

Embracing Agile by Harvard Business Review 05/2016 states “Some executives seem to associate agile with anarchy.” While an Agile organisation can make fast adjustments, the wider direction is still based on a long-term vision. Often the vision translates into a short-term roadmap, which customers can use safely to plan their own activities. In addition, the roadmap prevents individual teams from being siloed off. Vision and roadmap empower decentralized decision making by enabling everyone to work towards a common goal.

Agile checklist

  • Have a transparent vision
  • The near-term future is churned into a prioritized backlog
  • Most of the time is spent on the items at the top of the backlog
  • Measures are transparent and provide additional value for the stakeholders
  • Seek to automate everything and have an effective DevOps practice running
  • Constantly innovate on how to deliver value faster for your stakeholders
  • People feel safe and trust one another
  • People start their own initiatives; swarm intelligence produces new business opportunities
  • Have the ability to have difficult conversations

If the article got you interested, scared or angry, please do not hesitate to contact us. We are here to help.

Jari Hietaniemi

Jari Hietaniemi is an enthusiastic digitalization consultant. He specialises in complex and vast software projects. His philosophy is based on thinking that a consultant must know technology, architecture, project management, quality assurance, human resources, coaching and sales. His versatile experience and constant quest for improvement help to finish projects successfully and to bring new drive into client organizations.


Design system tools of the trade

At the beginning of November, a group of Gofore designers and developers working in various design system projects gathered to share their experiences. Participants from four different projects discussed the tools they use, best practices, and other things they have learned along the way.

In this blog post, we’ll take a closer look at the tools used for designing, developing, and documenting design systems.

Design tools

Design systems usually provide some kind of design library or style guide for UI/UX designers working on product features and user flows. The tool chosen for producing this design library must of course align with the tools used by the designers.

Design and collaboration tools used in Gofore Design system projects

It is not surprising that three out of the four Gofore design system projects that shared their experiences use Sketch. When Sketch was first released in 2010, it was considered a game-changer in the user interface design field, and since then it has established a position as an industry standard. Sketch has a solid ecosystem of plugins that help to expand its core features and automate tasks. However, in recent years Sketch’s dominance has been contested by new tools with fresh ideas.

Sketch’s main competitor Figma has raised interest in the design community with its all-in-one approach and its emphasis on collaboration, designer-developer handoff and design-system-oriented features. Also, the way Figma handles styles is more flexible, and its layout behaves a bit more like the HTML box model than Sketch’s does – something you might expect from a tool that is purpose-built for designing user interfaces. While Sketch is macOS-only, Figma is web-based and also provides a desktop app for both Mac and Windows.

One of the alternatives to Sketch is Adobe XD. For now, XD is generally not considered to be a prime choice for design systems, but since Adobe has a stronghold in virtually all other areas of design tooling, it can’t be counted out of the competition yet.

All in all, there are strong signs that Figma is already undermining Sketch’s dominance on the market. Even though all four design systems initially used Sketch, two of the teams have at least considered the prospect of switching to Figma and one has already made the switch – which has reportedly improved the consistency of the user interfaces.

Design collaboration and versioning

One of the main reasons why Figma has gained a foothold on the market is its emphasis on easy collaboration.

True collaborative workflow with Sketch can still be a bit cumbersome, even though Sketch launched their own browser-based file sharing and collaboration tool Sketch Cloud at the beginning of 2020. Its functionalities are still fairly basic – for example, the file inspector tool is still in beta – but it is definitely a step in the right direction.

So far Sketch’s shortcomings in basic collaboration features have usually been patched with Abstract – a separate design library tool that pairs with Sketch through a plugin. It brings git-like version control and a branch-based workflow for designers and enables reviewing, commenting, and inspecting design files. Abstract also helps to share library files with product teams, making it a prime tool for design systems. Like Sketch, its desktop app is available only on Mac, but it also provides a web-based interface.

Although Abstract is a great tool that has fundamentally changed the way designers can work together, it doesn’t always play nice with its counterpart. The fact that the whole collaboration workflow depends on two separate tools made by different companies makes it more vulnerable to compatibility problems. From this perspective, Sketch’s strategy of expanding its core features with plugins turns from its best feature into a burden. This is the pain point Figma aims to solve with its built-in collaboration and communication features.

Figma does not support a git-like branching workflow like Abstract does, but the Professional plan offers unlimited version history and the sharing of design system libraries with the team. The pricier Organisation plan also offers really promising design system analytics tools for measuring design system adoption and usage. This is hard to accomplish with Abstract, which only allows inspecting library dependencies file by file.

Time will tell how Sketch Cloud will develop and whether it can even Sketch’s odds in the competition with Figma.

Development tools and frameworks

Providing reusable and customisable components for developers is the basis of all design systems. As with design tools, the technologies design system components are built on are dictated by the technologies used in the actual products.

Front-end frameworks used

 

Among front-end-frameworks, the winner is clear: all four of the design systems are built with React.

Third-party libraries like Styled Components are also commonly utilised to make component development easier and faster, and to help tackle, for example, tricky functionality and accessibility issues. However, even though these libraries can bring short-term benefits and time savings, most design systems aim to keep dependencies to a minimum to avoid problems arising when the system grows more complex. Building design-system-quality components – especially accessible ones – from the ground up is more time-consuming, but when the design system starts to grow in scale, managing dependencies can start to weigh down the process and bring unexpected issues with it.

Code repositories and development environments

Quite often organisations identify the need for design systems when product development has already been ongoing for a while. Product teams have already established development workflows and environments and a design system is developed as part of these existing environments to bring consistency and reduce redundancy in development work.

In the tools used for versioning and distributing design system repositories, the projects are split in two. Two of the projects use GitHub. Both of them have published their design system as open source, so GitHub is an easy solution for code collaboration. The other two projects use Azure DevOps, a part of the Microsoft Azure cloud computing services, which provides a more comprehensive toolkit for DevOps and project management.

Other alternatives are for example GitLab and Bitbucket. All these tools are based on Git, but they all offer different DevOps tools for CI/CD, testing, project management, etc.

Component development / catalog platforms used

Most design systems also use development environment tools like Storybook and React Styleguidist. How these tools are utilised in the workflow varies between projects. Usually, they are used as a platform for developing and testing UI components, and as a component catalog for developers, showcasing components and their functionalities and documenting the available properties.

Documentation platforms

Without documentation providing principles and guidelines for using the building blocks both in design and code, the design system is only a pile of components without a clear purpose.

The best tool for a design system’s documentation needs depends mainly on who should have access, and who should be able to add content to it.

Documentation platforms used

 

Many design systems maintain a public documentation website – especially open source systems. Even if the actual components and design assets are kept private, a public documentation site can act as a marketing tool showcasing the organisation’s design approach.

Three out of the four design systems had a custom-made public documentation site – all built on the open source static site builder Gatsby or some more documentation-specific variation powered by it, for example Docz. Other open source alternatives are, for example, Docusaurus and Cupper, both of which advertise themselves as providing good accessibility – something Docz has some serious issues with.

If the design system is private and used within a single organisation, the documentation can also be managed in the organisation’s internal workspace, such as Confluence. One of the four projects used Confluence for the entire documentation; another used it only for keeping a record of design-system-related processes that didn’t belong on the main documentation site.

Both approaches have their pros and cons. A custom site gives the team free hands to build the documentation as they see fit: embed live demos and component playgrounds into the documentation, automate token listings, etc. But compared to Confluence, a Gatsby site needs more work to set up and maintain. It also demands at least basic coding skills from content editors, since documentation is written in Markdown and occasionally HTML or React. And, like any dependency, the site builder can also become a burden. Gatsby in itself is popular and well maintained, but Docz, for example, is no longer supported, so it has many open issues that have been left without fixes.

The benefit of keeping documentation on an internal platform is that it is easy to set up and maintain, and it’s easy even for non-developers to add content. But Confluence is not exactly made for this kind of use, and its features can become limiting. For example, it does not provide many possibilities for automation, integrations, or live component demos and playgrounds, so more documentation work has to be done by hand.

For this reason, the choice of documentation tool is a surprisingly important one. Up-to-date documentation is crucial for the success of the design system, and if documentation is too laborious to produce it can become stale, lose its purpose and make the whole design system a dud.

Consequently, new documentation platforms have started to emerge on the market. Services like Zeroheight and Frontify aim to combine the flexibility of custom-built documentation sites with the ease of content editing of workspace platforms. They also provide powerful integration tools for common design and development tools. Zeroheight and Frontify don’t come free, but they can help save considerable amounts of precious time and resources spent on documentation tasks and make the documentation more relevant and easier to digest for designers and developers. Services like these might well be the next big thing in the field of design systems.

Summary

Tools by project

 

The choice of tools can make or break a design system. Tools that do not fit the needs of designers, developers and other stakeholders can hinder the system’s growth and adoption. Tools are also constantly evolving, but changing tools along the way is tedious, and the decision should not be based on trends alone.

This is why the tooling of the systems should always be carefully considered right from the start, based on the needs of the products and teams it is built to serve.

Read also: The recipe for a successful Design System

Eemeli Nieminen

Eemeli Nieminen works at Gofore as a visual designer specialising in design systems and user interface design. He is particularly interested in ways of visualising information, theories of visual language, and the peculiarities of human perception and cognition. As a designer, he strives to make the digital world more humane and have a genuinely positive effect on people’s lives and society. Eemeli prefers to spend his free time unwinding in nature.


Meet your daily rivals

At a classic car event in 2015, I met someone with an impressive collection of cultural heritage on four wheels. When I asked, “What is your daily driver?”, he whispered that he drives an electric Mitsubishi and would never go back to combustion. To me, that was an unexpected answer. Among car enthusiasts, electric cars had been considered as sexy as long underwear. Impressed and shocked at the same time, I too began to get interested in electric cars. Eventually, while we were working on designs for E.ON charge poles, I took the opportunity to buy a second-hand fully electric car for “business reasons”. Have you ever experienced driving an electric car? While it is not the immediate “wow” sensation of your first motorbike ride, it quietly and slowly takes over you. The acceleration will make any Porsche driver jealous when racing you to the next set of traffic lights. And, to no surprise, it is as easy to operate and maintain as a hairdryer.

I enjoyed my time as an “early adopter”, though it seemed that everybody else knew better than me why this cannot be the future: “Don’t forget the rare minerals in Congo are almost depleted!” and “I heard it’s dangerous to grab the door handle when it’s raining!”. As with many topics, forwarded and unevaluated messages on social media are the main source of “information” for a substantial number of people.

I attended the electromobility congress “Hypermotion” recently, where a presenter thought he would surprise the audience with the fact that “surveys show that 70% of electric cars are being charged up at home!”
Excuse me? If you drove electric, you would know that there is mostly no other option, as the few charge poles available to us in the cities are permanently besieged.

In the beginning, I figured a public charge point would be a place to meet nice people with the same attitude; a place to chat about sustainability or new technologies. But when access to the ever-scarce source of energy is contested, things are quite different… We’ve all seen people reserve deckchairs at the hotel pool in the early morning, regardless of whether they actually show up. I can confirm that the same strategies are being used at the charge point: “Someone else might take it if I do not.”

Here comes a twist: the true benefit of a spot at the charger is not improving the battery status of your car, but rather the unlimited free parking. Engineers gave us fast DC charging, allowing you to charge up to 80% in no more than 30 minutes. But half an hour is too long for a cigarette break and too short for a shopping tour downtown. That is why the slow AC plug is so popular and desirable.

There are some reasons why fewer charge points have been built than are required. One is the measuring of electricity.

Shouldn’t measuring the amount of used electricity be as easy as it is at home? Sorry, we had to reinvent it, and that takes time in good old Germany. For the time being, we have to wait for more public chargers that are compliant with the specs of the Physikalisch-Technische Bundesanstalt.

Whenever technology is about to change, we tend to focus on the issues we will face. And while that can be a healthy attitude, I wish everybody also considered the advantages that justify overcoming those challenges. We got used to petrol and diesel engines, as they have been in operation for 100 years already. Go ahead and look inside one or take one apart; you will be amazed at how complex they have become and how difficult it is to get them to run. Hundreds of metal and rubber components, all running in a dirty bed of oil. A complex organism with countless weak spots and generally unsustainable behavior. All of this has been made completely obsolete by a small, silent, and clean motor with no need for maintenance.

Electromobility is new and it is fantastic. Let us develop this new and sustainable technology and the infrastructure hand in hand as fast as possible. We at Gofore are convinced that this is the future on the way to the zero-emission goals. We have been working on various related projects for years: we designed ergonomic public chargers, we developed state-of-the-art software for charging platforms, and we will be happy to make further contributions.

Marcus Anlauff

As a graduate designer, Marcus has been designing electronic devices for various product areas for over 30 years. What motivates him? The demand to design user-friendly and at the same time technically durable products, best of all with the use of alternative materials. As an ambassador for the use of sustainable materials in production, Marcus has already developed several award-winning products made of cardboard, for example. Since 2017, he has been increasingly involved in the field of electromobility, which he believes currently offers the greatest potential for innovation. His expertise is underlined not only by the pioneering charging columns designed for E.ON, but also by his sense for emerging trends on the market.


We said we had information security,

But were asked about its maturity.

So we took a route to remove any doubt,

By certifying it for tendering surety.

 

 

That’s the situation we were in toward the close of 2019. We had our information security management system (ISMS) only somewhat documented, and our onion rings of protection still needed some growing. When prospective customers asked for our information security credentials, we provided them. But often they were unimpressed. They further inquired about our security practices, security risk management, and level of awareness. And finally, they asked whether we were information security certified. Or an application for tender asked the simple question “Is the company ISO 27001 certified?” Answering no meant that we were already at a disadvantage before the tendering competition even began, or in some cases that we were not eligible to apply at all. Clearly this could not continue; we had to turn disadvantage into opportunity.

It started with top management accountability and commitment

In December 2019, the Gofore Security Team rose to the challenge and proposed to the management team, with evidence of tendering disadvantage in hand, that our ISMS should be formalised and certified to meet customer expectations. This was the critical first step because such an impactful company-wide project categorically requires the sanction, commitment and sponsorship of top management.

We proposed that ISO 27001* should be the internationally recognised standard to certify to. Top management gave the green light to start the ISO 27001 project on the condition that we, quote CEO Mikael Nylund, “don’t do anything stupid”. The project was overseen by Chief Information Security Officer Jani Lammi and led by Secure Design Consultant Niall O’Donoghue, who advocated for ISMS certification, with Security Consultants Tapio Vuorinen and Akseli Piilola as specialist project members.

Next, we scoped the ISO 27001 project. This was a critical second step (after top management approval), and so, to avoid doing “anything stupid” right away, we delved into sources of advice and past experience that warned that too wide a scope had doomed many organisations’ first attempts at certification. We really didn’t want to join that wall of shame. When the scope was agreed, it simplified identifying the associated tangible and intangible assets, and that in turn simplified identifying stakeholders, i.e., those with a vested role, responsibility, or interest in how the ISMS protects the in-scope assets they utilise. Keeping within the certification scope was essential for formalising our ISMS benchmark, which other company sites could then aim to comply with.

That human connection called communication

Another pitfall we were careful to avoid was miscommunication. This was a delicate balancing act, because the two leading causes of miscommunication in business are lack of communication, whereby no one knows what’s happening, and excess communication, whereby too many messages lead to key take-aways being missed or buried. We aimed for a cosy middle way with monthly stakeholder update meetings, several strategically issued awareness-raising broadcasts, a survey, and direct contact with key stakeholders.

Direct stakeholder involvement was crucial for information security risk identification and assessment. ISO 27001 is heavily based upon identifying relevant risks and applying human, process or technical mitigating controls to ensure sufficient asset robustness and resilience against ever-present threats. Risk identification and control implementation were time-consuming activities.

Soliciting employee (aka Gofore crew) opinion and input was achieved by means of a security pulse survey since, after all, the crew are the company’s most valued asset, so their observations and recommendations must be taken into account. In a company with a flat organisational structure and a culture of transparency and self-determination, crew acceptance of and compliance with ISMS improvements is essential for effective security in practice.

Preparing for ISO 27001 certification consumed a lot of time and resources, not only from the Security and IT teams but also from key stakeholders. Keeping the many ISMS facets being improved under control required proactive, timely planning, a ticket-workflow methodology, and fine-tuning along the way. Focusing on the critical and prioritising the resolution of non-conformities and long-term ISMS deficiencies was central to getting things progressively done toward compliance. An internal audit, a pre-audit and a stage 1 audit served concurrently as project delivery targets and progress checkpoints.

The project began in December 2019 and ended in December 2020 with the certification audit, which took place over four days and comprised thirty-two interviews and four office site tours. We achieved ISO 27001 certification, which probably confirms that we didn’t “do anything stupid”, and now we can bask in the celebration of our achievement for a while. But in celebrating, we must not forget why we did this project. The top four reasons we certified our ISMS were 1. to raise the level of security awareness in the Gofore crew, 2. to evolve the ISMS to a condition expected from a digitalisation company, 3. to control identified security risks the company constantly faces, and 4. to win more customer project tenders.

The project is over but the program continues

Security is sure to fail if it’s kept separate from everyday business. Security cannot be bolted on either humanly, process-wise, or technically; it must be integrated and as seamless as sensibly possible. So, the ISMS program must continue as a normalised aspect of everyday business. After the certification audit, business stakeholders and crew are still as relevant as ever for ensuring ISMS effectiveness, and so the collaboration continues. There are more risks to identify and controls to implement. There must be ongoing security awareness to ensure newcomers and contractors comply with our information security policy. An annual review and audit of our ISMS must be conducted. Our ISMS must not regress; it must progress as business progresses.

 

Niall O’Donoghue on behalf of Gofore

 

You will find Gofore listed as ISO 27001 certified via https://www.kiwa.com/fi/fi/palvelutyyppi/sertifiointi-ja-arviointi/sertifikaattihaku/

* ISO 27001 is a security standard that provides a framework for establishing, implementing, operating, monitoring and maintaining an ISMS. ISO 27001 is widely accepted as the leading security standard in the information and communications technology industry for verifying the effectiveness of an organisation’s overall approach to security.

 

Niall O'Donoghue

Niall is a secure design best practices advocate, coach and promoter. His experience includes seeding the secure design mindset and best practices for private sector Internet of Things web applications and facilitating threat analysis workshops for public sector web application projects. Niall is also passionate about helping organisations to evolve their overall security maturity.
