Artificial Intelligence and algorithms are shaping our work.
“It is important to prepare for a hybrid workforce in which AI and human beings work side by side. The challenge for your business isn't just ensuring you have the right systems in place but judging what role your people will play in this new model. People will need to be responsible for determining the strategic application of AI and providing challenge and oversight to decisions.” The PwC report, Sizing the Prize, 2017
The world is transforming. COVID-19 has revealed our vulnerability and exposed complexities. The lead time for making a positive impact has been cut short. Power structures of the past are not future-proof. In my opinion, backed by a multitude of fresh research, there are three key drivers of new ways of working.
First, organizations relying on command and control are being replaced with fluid, transparent, and self-determinant ways of working. Hierarchical command-and-control power structures are fading away, slowly but surely. Second, digitalization is continuing at an exponential speed. For example, in Spain, the first 60 days of COVID closure are said to have accelerated the country's digitalization by seven years (El Confidencial, 2020). Third, as interconnected emerging technologies continue to advance and converge, they will hit knowledge workers, managers, and the C-suite hard. Old competencies will be questioned, and new ones required. In this development, the main drivers will be Artificial Intelligence (AI) & Data, and particularly algorithms. Leaders will be heavily affected. They need to become more people-positive and complexity-conscious to activate the full potential of the hybrid workforce, combining the humans and machines of tomorrow already today. Leaders need to return to and excel in their number one role: being a coach for their people.
I am not a data analyst, nor a coder, nor an AI & Data expert, but an experienced organizational development professional. Therefore, my focus is the management and leadership of individuals, teams, and organizations. Thus, I would like to extend the discourse from the purely technical matters of AI & Data to the new leadership skills, capabilities, and structures that will be required. I feel that the prevailing way of looking at management automation through a technical lens, with AI & Data mostly as a cost-reduction practice, is short-sighted and dangerous. AI & Data do not intrinsically make changes; they are enablers for societies and organizations to progress. No doubt, AI-enabled future economies will see massive shifts in workforce competency requirements at all organizational levels, including experts and executives. To cope with fast-paced digitalization, organizations need to adapt fast, get organized in new ways, and unleash the full potential of both people and machines to thrive. Is the unstoppable development of AI & Data the cure or the cancer of business transformations?
Constant Human Desire for Better Technologies
Throughout history, we have sought help from technical inventions that support output efficiency and process accuracy. In the Western world, we have traveled the journey from the steam engine to electricity to the computer to the 24/7 AI & Data of today. This development has been backed by old management theories, mostly Taylorism (scientific management) and hierarchical power structures based on command and control, both of which still affect us today. The first and second industrial revolutions focused on the human body, the third and the current fourth on the human mind. By now, we humans have lost the performance game, in terms of both body and mind, to AI & Data-driven digitalization, algorithms, and robotics.
According to Moore's Law, which holds that computer processing power doubles roughly every 18 months, this is just the beginning, not the end, of this development in AI & Data. Fast AI developments, like machine learning (ML) and deep learning (DL), have big economic implications for our societies and organizations. PwC has predicted that AI's contribution to the global economy will be USD 15.7 trillion by 2030. What are the possibilities and limitations of algorithms?
Current Pros and Cons of Algorithms at Work
Expert or not, one should understand that AI is NOT just another new technology: AI is creating convergence and binding many emerging technologies together. AI is not neutral. It always mirrors, directly or indirectly, its creators' perceptions, experience, and values. Algorithms can (as of autumn 2020):
- Discern patterns in raw data faster and better than star data analysts.
- Optimize processes in more detail and with greater precision than the best operations managers.
- Analyse and predict behavioral trends and their implications for humans.
- Assess and weigh complex variables for top decision-making better than executives.
- Excel, like a savant, in a tightly limited sphere of expertise.
- Extend certain precise human capabilities to levels beyond our reach.
Algorithms cannot (as of autumn 2020):
- Build sense-making and connect unrelated ideas into a new creation, an innovation.
- See things from different perspectives; even with ML, only programmed views are seen.
- Act with intuition, build trustworthiness, and show situational emotional intelligence.
- Store data as rich (sights, sounds, sensations, smells, or emotions) as our brain does.
- Apply experiences and understanding across unlimited sets of situations, like the human brain.
Algorithms can increase performance in process speed, efficiency, and accuracy. In the fields of innovation, creativity, critical thinking, and sense-making through intuition, AI at the time of writing only offers a poor imitation of what humans can do naturally. Thus, people can better combine, create, and bring more thorough insights to the game. Holistic situational sense-making is the key difference between algorithms and humans. Therefore, multifaceted thinking is a must for leaders. They need to be able to think simultaneously in a three-dimensional way: reactively for the situation at hand, proactively for things not yet seen, and reflectively for continuous improvement. Reflection here means an inner capability to critically confront one's own actions instead of accepting, without question, autonomous outside-in thinking and the data available.
Trustworthiness of Algorithms Under Deep Scrutiny
Many research studies reveal that humans feel concerned, suspicious, and uncomfortable when dealing with algorithms that make decisions on their behalf. As a general trend, people perceive the functioning of autonomous algorithms as something of a black box. This suspicion stems from the lack of transparency about how algorithms were generated and the difficulty of explaining algorithms clearly to non-experts, and even to engineers, data analysts, and coders. This lack of knowledge has created fully understandable prejudices against trusting algorithms in work settings. Paradoxically, in their leisure time, people trust algorithms to make choices and shape their purchase patterns in services like Amazon, Netflix, or Spotify. Even stranger are research findings in which people preferred to be replaced in their daily work by an algorithm rather than by another person.
Thus, to build employees' and customers' trust in algorithms at work, immediate, systematic, and visible actions are required. Just recently, Google announced that it will help other companies with the tricky ethics of AI (Wired, 2020). One internal way to grow trust in algorithms is to break the silos between AI & Data experts and experienced leaders. The objective should be to engage people with diverse backgrounds and experience to work together towards a common purpose, considering both the efficiencies and the ethics of AI & Data. For example, this type of challenge has been tackled in some of the most affluent digital companies, like Microsoft and KPMG, with their AI ethicists. The role of these AI ethicists is to secure the transparency, ethics, and outcomes of algorithms before they go live. How about working with algorithms?
People and Data-Centric Combo First Steps
As a practical example, it was decided at Gofore Plc in 2016 that all clerical tasks that can be automated will be automated. The bot-manager “Seppo” supports commercial decision-making, and the bot-assistants “Gene” and “Granny” take care of daily routine tasks. For example, they remind about unmarked billable hours, confirm holiday requests, file travel expense claims, provide personal performance statistics, and report company-wide statistics. These scalable HR bots, combined with Lean & Agile ways of working, have enabled fast and continuous financial growth with less management and lighter structures, simultaneously releasing time for leadership, development, and customers.
There used to be a time when we were doing more with less. Now we are doing less with less. One possible solution to this dilemma is a hybrid workforce. As at Gofore, modern digital tools like bots, robots, robotic process automation (RPA), and other digital means are key enablers of the future of work, but one should not jump on the bandwagon blindly. Recent examples of failed algorithms include the A-level and GCSE examinations fiasco in the UK in August 2020 (The Guardian, 2020) and the Amazon recruiting disaster of 2018, where the recruiting algorithm systematically favoured male applicants (Reuters, 2018). An internal investigation of the case found that Amazon's recruitment algorithms could not, without supervision, build sense-making of the desired future from distorted past data.
On the other hand, one of the largest Finnish digital forerunners – Tieto Ltd. – has implemented an AI assistant named Alicia who is a member of the management team with full voting rights. Alicia has reminded the company´s board members of important data and statistics and helped them to make smarter decisions. Similar developments are occurring around the world. For example, AI assistants like IBM´s Watson can bring together complex data from various sources, analyse their trends against a company´s internal metrics and business objectives, and present suggestions based upon its findings (Rouhiainen, 2020).
Best of Both Worlds, A Hybrid Workforce of Tomorrow
The changes that algorithms are bringing to the business world are massive. As mentioned, they can drastically increase the efficiency, accuracy, and speed of running businesses, but they still lack human-specific traits, like emotional, innovative, and critical-thinking skills, that finally make or break businesses. AI & Data development will not only be about technology. The time for solely command-and-control-driven managers and administrators is over. I perceive that transformed businesses will need a new kind of curious, creative, and critically thinking leader with an emphasis on interpersonal and emotional skills. Effective leaders will need far more soft human skills to understand human dynamic systems and to motivate, innovate, facilitate, and assimilate business impact better than in the past. I believe that the biggest challenge is to renew our mindset to positively challenge and combine the best of both worlds, human and machine, as a hybrid digitalized workforce.
I hope experts on AI & Data, digital pioneers, and executives, together with social scientists, take a courageous stand on AI & Data systems and related algorithms, with clear roles, responsibilities, and preventive rules. Together we need to steer the evident and unstoppable development of AI & Data towards a positive impact, creating social, economic, and environmental added value for all. If digitalization is derived from natural human capabilities rather than performance only, I believe that this is not a race against the machine, but a race with the machine. The choice is ours.
“As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. Understanding what AI can do and how it fits into your strategy is the beginning, not the end, of that process.”, Andrew Ng, the world-renowned expert in machine and deep learning
“Ignorance is never better than knowledge.”, Enrico Fermi, winner of the 1938 Nobel Prize for Physics
What kind of data do companies have the most of? Most likely text data, like Word and PDF documents. For example, there could be documents about customer feedback, employee surveys, tenders, requests for quotation, and intranet instructions. International companies may even have these documents in multiple different languages. How can you analyze multilingual documents with Natural Language Processing (NLP) techniques?
NLP is a subset of Artificial Intelligence (AI) whose goal is to understand natural human language and enable interaction between humans and computers. The interaction can involve either spoken (voice) or written (text) language. Nowadays, many state-of-the-art NLP techniques utilize machine learning and deep neural networks.
One of the NLP tasks is text classification. The goal of text classification is to correctly classify text into one or more predefined classes. For example, a customer feedback document could be classified as positive, neutral, or negative feedback (sentiment analysis), or a request for quotation document could be routed to the backlog of the correct sales team of the company. Thus, the NLP model gets text as input and outputs a class.
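To make the text-in, class-out shape of the task concrete, here is a deliberately naive, hand-rolled sketch with no machine learning involved: a keyword-based sentiment classifier. The keyword lists are invented purely for illustration; a real model would learn these associations from data instead.

```python
# Toy illustration of text classification: map a text to one of three
# sentiment classes using hand-picked keyword lists (hypothetical words).
POSITIVE = {"great", "excellent", "love", "good"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def classify_sentiment(text: str) -> str:
    words = set(text.lower().split())
    # Score is the count of positive hits minus negative hits.
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A neural text classifier replaces the keyword sets with learned representations, but the interface stays the same: text in, class out.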
During the last couple of years, NLP models based on the neural network “Transformer” architecture, like Google's BERT model, have broken records on many different NLP tasks. These models are really interesting and have even made headlines for being considered too dangerous to release openly. However, they have mostly supported only English or other popular languages. What if you would like to classify text in Finnish or Swedish, or both?
Multilingual text classification
Until recently, openly released multilingual NLP models, like Google's multilingual version of BERT, have not performed as well as monolingual models, especially in low-resource languages like Finnish. For example, the monolingual Finnish FinBERT model clearly outperforms multilingual BERT in the Finnish text classification task.
However, at the end of 2019, Facebook's AI researchers published a multilingual model called XLM-R, supporting 100 languages including Finnish. XLM-R was able to achieve state-of-the-art results in multilingual NLP tasks and also be very competitive against monolingual models in low-resource languages. This new model looked very interesting, so I decided to try it out for multilingual text classification.
Hugging Face's “Transformers” Python library gives really easy access to the latest state-of-the-art NLP models and lets you use them for different NLP tasks. The XLM-R model is also available in the Transformers library. We can take the pre-trained XLM-R model and utilize the “transfer learning” concept to finetune the model to, for example, classify news article texts into news category classes. In the context of these NLP models, transfer learning means taking a pre-trained general-purpose NLP language model which has been trained on a large text corpus (XLM-R was trained with more than two terabytes of text data!) and then training the model further with a much smaller dataset to perform some specific NLP task, like text classification.
For this experiment, my goal is to finetune the XLM-R model to classify multilingual news article texts into corresponding news categories. That is a supervised machine learning task, so the dataset I am using is a labeled dataset containing news article texts and their category names. Another really interesting feature of XLM-R and other multilingual models is their “zero-shot” capability, meaning you can finetune the model with a dataset in only one language and the model will transfer the learned knowledge to the other supported languages as well. Since I am especially interested in the Finnish-language capabilities of the XLM-R model, the dataset contains only Finnish news articles with their categories. Thanks to the “zero-shot” capability, the XLM-R model should also be able to classify news articles in other languages in addition to Finnish. You can see an example of the dataset in the table below.
In total, there are only 3278 rows in my dataset, so it is rather small, but the power of the earlier introduced “transfer learning” concept should mitigate the issue of the small amount of training data. The dataset contains 10 unique news category classes, which are first changed from text to a numerical representation for the classifier training. The dataset is also split into train and test sets with an equal distribution of the different classes. Finally, the XLM-R model is trained to classify news articles.
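The preprocessing steps above (label encoding and a class-balanced split) can be sketched in plain Python. This is a simplified illustration of the idea, not the exact code used in the experiment; the function names are my own.

```python
import random
from collections import defaultdict

def encode_labels(labels):
    # Map each unique class name to an integer id, e.g. "urheilu" -> 1.
    classes = sorted(set(labels))
    class_to_id = {c: i for i, c in enumerate(classes)}
    return [class_to_id[l] for l in labels], class_to_id

def stratified_split(texts, labels, test_fraction=0.2, seed=42):
    # Split so that each class keeps roughly the same share
    # of samples in the train and test sets.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for text, label in zip(texts, labels):
        by_class[label].append((text, label))
    train, test = [], []
    for items in by_class.values():
        rng.shuffle(items)
        cut = max(1, int(len(items) * test_fraction))
        test.extend(items[:cut])
        train.extend(items[cut:])
    return train, test
```

In practice a library helper (for example scikit-learn's `train_test_split` with its `stratify` argument) does the same job, but the sketch shows what "equal distribution of classes" means concretely.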
In the picture below you can see the training and validation losses, which both follow a quite nice downward trend over the training steps, meaning the model is learning to classify more accurately. The validation loss is not increasing at the end, so the finetuned XLM-R model should not be overfitted. Overfitting means that the model would learn to classify the text in the training dataset too exactly and would then not be able to classify new, unseen text so well.
Another model evaluation metric for multiclass classification is the Matthews correlation coefficient (MCC), which is generally regarded as a balanced metric for classification evaluation. MCC values are between -1 and +1, where -1 is totally wrong classification, 0 is random, and +1 is perfect classification. With the testing dataset, the MCC value for the finetuned XLM-R model was 0.88, which is quite good. The result could be even better with a larger training dataset, but for this experiment the achieved performance is sufficient.
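For reference, the multiclass MCC can be computed directly from the confusion matrix. This is Gorodkin's generalization of the binary MCC, the same statistic that libraries such as scikit-learn implement; a small self-contained sketch:

```python
import math

def mcc_multiclass(y_true, y_pred):
    # Build the confusion matrix: rows are true classes, columns predicted.
    classes = sorted(set(y_true) | set(y_pred))
    idx = {c: i for i, c in enumerate(classes)}
    k = len(classes)
    conf = [[0] * k for _ in range(k)]
    for t, p in zip(y_true, y_pred):
        conf[idx[t]][idx[p]] += 1
    s = len(y_true)                           # total number of samples
    c = sum(conf[i][i] for i in range(k))     # correctly classified samples
    t_k = [sum(conf[i]) for i in range(k)]    # true occurrences per class
    p_k = [sum(row[i] for row in conf) for i in range(k)]  # predictions per class
    numerator = c * s - sum(p * t for p, t in zip(p_k, t_k))
    denominator = math.sqrt((s * s - sum(p * p for p in p_k)) *
                            (s * s - sum(t * t for t in t_k)))
    return numerator / denominator if denominator else 0.0
```

A perfect prediction yields +1.0 and predictions independent of the labels yield values near 0, matching the interpretation given above.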
The most interesting part of the finetuned XLM-R model is finally using it to classify new news articles that the model has not seen during the earlier training. In the table below, you can see examples of correctly classified news articles. I tested the classification with Finnish, English, Swedish, Russian, and Chinese news articles. The XLM-R model seemed to work really well with all of those languages, even though the model was only finetuned with Finnish news articles. That is a demonstration of the earlier mentioned “zero-shot” capability of the XLM-R model. Thus, the finetuned XLM-R model was able to generalize well to the multilingual news article classification task!
Multilingual vs monolingual NLP models
In the original research paper on the XLM-R model, the researchers state that, for the first time, it is possible to have a multilingual NLP model without sacrificing per-language performance, since XLM-R is really competitive compared to monolingual models. To validate that, I also decided to test XLM-R against the monolingual Finnish FinBERT model. I finetuned the FinBERT model with the exact same Finnish news dataset and settings as the earlier finetuned XLM-R model.
Evaluating the performance of FinBERT and XLM-R with the testing dataset showed that the monolingual FinBERT was only a little better at classifying Finnish news articles. In the table below, you can see the evaluation metrics, Matthews correlation coefficient and validation loss, for both models.
This validates the findings of Facebook AI's researchers that the XLM-R model can really compete with monolingual models while being a multilingual model. And while the FinBERT model can understand Finnish text really well, the XLM-R model can also understand 99 other languages at the same time, which is really cool!
Experimenting with the multilingual XLM-R model was really eye-opening for me. Especially the “zero-shot” capability of the XLM-R model was quite jaw-dropping the first time I saw the model classify Chinese news text correctly, even though it had been finetuned only with Finnish news text. I am excited to see future developments in the multilingual NLP area and to implement these techniques into production use.
Multilingual NLP models like XLM-R could be utilized in many scenarios, transforming the previous ways of using NLP. Previously, multilingual NLP pipelines have usually involved either a translator service translating all text into English for an English NLP model, or separate NLP models for every needed language. All of that complicates the pipeline and development, but with multilingual NLP models everything could potentially be replaced with a single multilingual NLP model supporting all the languages. Another advantage is the “zero-shot” capability: you would only need a labeled dataset for one language, which reduces the work needed to create datasets for all languages in the NLP model training phase. For example, to classify international multilingual customer feedback, you could create the labeled dataset from feedback gathered in one language only, and it would then work for all the other languages as well.
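Schematically, the simplification looks like this. The function names and the model/translator arguments are hypothetical stand-ins for real services, purely to contrast the two architectures:

```python
# Hypothetical sketch contrasting the two pipeline architectures.

def legacy_pipeline(text, detect_language, translate_to_english, english_model):
    # Old approach: detect the language, translate everything to English,
    # then classify with a monolingual English model.
    if detect_language(text) != "en":
        text = translate_to_english(text)
    return english_model(text)

def multilingual_pipeline(text, multilingual_model):
    # New approach: one multilingual model (e.g. finetuned XLM-R)
    # classifies text in any supported language directly.
    return multilingual_model(text)
```

The multilingual variant removes two failure-prone components (language detection and translation) and one source of cost and latency from every request.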
This is mind-blowing and groundbreaking. One NLP model to rule them all?
The use of artificial intelligence is not as difficult as imagined. You can get started with fairly raw data. The required workloads are also reasonable, although it requires an attitude that deviates from the norm.
In artificial intelligence projects, data is utilised differently than before, often in completely unprecedented ways. When starting such projects, no one can be sure what the end result will be, when it will be ready (or whether it will ever be finished), and what it will cost.
This uncertainty slows down the use of artificial intelligence, especially in organisations that do not want to let go of the old ways of working, predetermined plans, business cases and fixed price offers. Now is the time to dare: The longer you wait, the more data accumulates in your organisation in vain.
1. You cannot define the outcome in the beginning
Problems that have not been solved before can often be solved with the help of artificial intelligence. I myself have been involved in solving problems which were not even known to exist at the start of the project.
So how do you know how to utilise AI? Who can tell you where you should apply it? Nowhere. Nobody. At least not if you do not dare to start.
You will have to decide to start. You need to clear space for the exploration and exploitation of AI. You need to give your organisation an opportunity for something new and unspecified.
2. Keep an open mind for learning
In artificial intelligence projects, the most important thing is not the technology but the willingness and ability to create something new. An artificial intelligence project can begin by streamlining an existing process, but there is much greater potential in new innovations. LED lights were not born out of making candles!
3. AI creates bridges over silos
A broad understanding of opportunities is needed because AI solutions should, under no circumstances, be utilised solely for traditional point-to-point profit centre development. This is exactly where the potential of AI lies. You can use it to combine source data from across the organisation – data that could not previously be combined. For this reason, it is very typical that the most valuable findings come as if by accident.
Don’t let the assumptions stop you
False assumptions can obstruct the path of artificial intelligence. Perhaps the most common of these are prejudices related to data protection and law, and the assumption that the quality of one’s own data is not enough.
Fairly raw data is enough to get started. Anonymisation and pseudonymisation enable secure applications in a closed environment, where anonymity cannot be broken because unlimited data sharing is not possible. In many organisations, the settings are reasonably good.
Many processes have already been digitalised; data on important processes and customers can be found. Also, awareness of the potential of artificial intelligence is steadily spreading. Take the opportunity to provide your organisation with valuable personal experiences of what artificial intelligence really is and what can be realistically achieved with its help.
Begin today. You don’t gain anything by waiting!
What is a Smart City? How is it different to plan, or to live in? What do “smartly built” or “smartly behaving” cities offer for sustainability? Is “smart” equal to eco-friendly, or is sustainability more important than the absolute “smartness” of the city?
Smart cities differ from “normal” cities in their ability to predict, and maybe to be a proactive platform for services. They offer quality of life and happiness to their residents. When inhabitants find a problem (for example with day-care, schools, roads, safety, etc.), the city is expected to react and fix it. And with lots of requirements, cities need to prioritise their efforts and preferably find out about problems even before they are pointed out by the people.
“Scenario: Our preferred day-care close to our home is fully booked. The city offers an alternative day-care near my workplace. They provide detailed information on how I can take my children there – using a bus and a monthly ticket is the cheapest option. Another affordable and eco-friendly alternative is a bicycle with a child carrier. All the information above I received through my CityApp. We did not even have to apply for day-care, we only gave access to the city to our MyData and received these personalised services.”
In that scenario, two things need to be solved:
1) Data collection and utilisation
The city is required to gather, use, and offer data as a service for various apps and solutions. Before that, it is required to understand what data is needed, what data exists, and what needs to be generated, and how that data can be utilised to serve the citizens.
2) User consent
To be able to receive personalised service, the user must share some of their personal data. Through MyData, personal information can be shared in a controlled manner. Combining personal data with anonymised wider data sets is an efficient way to provide well-targeted services for individuals. The handlers of the data, i.e. the cities, must be conscious of data security and prevent its misuse.
Cities will face remarkable challenges in the near future, with continuous growth in population and population density combined with carbon-neutral sustainability requirements. To tackle such challenges, cities must be smart in their data- and knowledge-based city planning and in proactively providing sustainable services to citizens. Gofore is working on such solutions with various cities, as well as governmental organisations.
Gofore Case: City of Helsinki education division communication system
The city of Helsinki has started a project to develop communications between the authorities and families with children in day-care or at school, and to develop city services using data and AI. Within the project, the services are developed proactively, and related data is applied widely. The development is open-sourced to make it easier to respond to future requirements. The overarching objective is to serve families with the same data and applications from day-care through to high school.
In this project, Gofore is responsible for user experience design and software development of the communication system between the families and authorities.
See also another reference of City of Helsinki
Gofore Case: X-road
X-Road is an open-source data exchange mechanism that enables reliable and secure data exchange between different information systems over the Internet. As a highly interoperable and centrally managed distributed data exchange layer, it provides a standardised and structured way to integrate different types of information systems and to produce and consume services. It is an easy, cost-effective, reliable, secure, well-supported and tested solution for enabling Smart City solutions. X-Road technology is used nationwide in the Estonian public administration and in the Finnish Suomi.fi Data Exchange Layer service.
Gofore has delivered Finnish X-Road implementation and various services into Suomi.fi portal. The development continues. Additionally, Gofore is a publicly procured X-Road core developer for Nordic Institute for Interoperability Solutions (NIIS) and currently the only Gold level X-Road Technology Partner.
“Scenario: A city planner uses chatbots to plan a new neighbourhood in the city. The AI behind the chatbot gathers and analyses invaluable data for the planning from discussions with the citizens. As the planning proceeds, the chatbot notifies people who have shown interest or are living in the areas affected by the planning. City planners have this efficient, all-knowing colleague supporting them in their work. It predicts the planners' needs and helps to formulate and iterate towards detailed enough information through citizen engagement.”
Cities are service organisations for their inhabitants and visitors – as well as a productive and lucrative working environment for civil servants. Digital technology is an enabler, with services and solutions required to be very human-centric. A smart city does not mean that people would not have to do anything, but in order to be smart, the city is expected to ease our life by providing easy choices and proactive, well-targeted proposals.
In Finland, there is an ongoing project called “AuroraAI” that aims to implement an AI-boosted operational model based on people's and companies' needs to utilise various services in a timely and ethically sustainable manner. A combination of services from various sources supports people's life events and companies' business-related events, facilitating seamless, effective, and smoothly functioning service paths throughout the process. This provides people with a new way of taking care of their needs and overall well-being. Simultaneously, the system will encourage service providers to form dynamic, customer-oriented service chains in collaboration with other operators and to manage their activities based on up-to-date information. Gofore is working within the programme to create such multi-disciplined services for the inhabitants of Finnish cities.
Gofore case: Chatbots
Artificial intelligence makes Netflix recommend programmes for us and Facebook automatically tags recognised friends in photos. A robot car could not drive without machine learning. But can artificial intelligence also help ordinary office workers? Yes, it can. We have developed three intelligent chatbots that help people perform many essential everyday tasks effortlessly.
Seppo, Granny and Gene are text-based conversational chatbots that operate in the Slack instant-message environment frequently used by Gofore employees. Seppo is the veteran of the bunch, having originally been developed in 2016. Seppo was born out of a real need: Seppo performs, or actively prompts Gofore employees to complete, administrative tasks that nobody is keen to do but which are very important for keeping the self-guided organisation going. For example, Seppo prompts for unreported working hours, or advises an employee to take a break if they work too much. Based on the good experiences with Seppo, we have developed additional chatbots to help with everyday routine tasks. Gene, for example, turns the complicated flow of booking train tickets and reporting travel expenses into a simple conversation. Granny, for her part, is a laid-back office advisor who can be consulted on general matters regarding the company.
Read more about the bots
What can be done to grow the intelligence of cities? Meet us at Smart City Expo 2019 and let’s figure it out. Event details can be found here.
In an episode of America’s Got Talent, an aspiring stand-up comedian puts on a despondent expression and sighs “I had my identity stolen. It’s okay. They gave it right back.” That’s funny, right? Except for the fact that in the real world it isn’t funny at all. Your identity is your cultural, familial, emotional, economic, and social anchor, at the very least. So, if your identity is stolen, you don’t get it back just like that. It’s a personal integrity catastrophe that’s very hard to recover from.
eID Forum 2019 in Tallinn
eID Forum 2019 – Shaping the Future of eID took place on 17 and 18 September in the lovely Hanseatic city of Tallinn. The aim of the eID Forum was to bring together representatives from the public (government) and private (industry) sectors. It drew more than 300 participants from 34 countries to share ideas and to emphasise the urgency of facilitating trustworthy civil and business digital transactions across national boundaries.
The Forum focused on high-level overviews of the processes and technologies used to develop contemporary eID solutions. We noticed a technology-centric focus and marketing of ready-made solutions. The focus topics for this year were:
- a cross-border digital standard for mobile driving licences, and
- the future of digital borders, including face recognition and its use cases in airports.
Should you be able to identify yourself or just your right to drive?
Huge efforts are being made to standardise driving licences that can exist in a form other than paper or plastic. But first, what is an eID (e-Identity)? An eID is a unique and immutable digital proof of identity for citizens and for organisations. One’s eID is a right and cannot be suspended or revoked, because it is akin to, for example, one’s birth certificate. While an eID’s core attributes must be fundamentally self-sovereign and immutable, a wide variety of attributes can be granted to it and revoked from it; for example, privileges such as holding a driving licence. And one’s eID must be multimodal, usable across a variety of identification-dependent digital systems and communication channels.
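The distinction above, an immutable core identity with revocable privileges attached, can be illustrated with a minimal model. The class and attribute names are our own assumptions, not any official eID schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)          # frozen: the core identity cannot be mutated
class CoreIdentity:
    eid: str                     # unique, immutable identifier
    born: str                    # e.g. the date from the birth record

@dataclass
class EIDRecord:
    core: CoreIdentity
    privileges: set = field(default_factory=set)  # grantable, revocable attributes

    def grant(self, privilege: str) -> None:
        self.privileges.add(privilege)

    def revoke(self, privilege: str) -> None:
        self.privileges.discard(privilege)  # revoking never touches the core

if __name__ == "__main__":
    rec = EIDRecord(CoreIdentity("EE-38001085718", "1980-01-08"))
    rec.grant("driving-licence")
    rec.revoke("driving-licence")
    print(rec.core.eid)  # the core identity survives any grant/revoke cycle
```

The `frozen=True` dataclass captures the self-sovereign core: privileges come and go, while the identity itself stays intact.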
In Estonia, one does not have to carry a driving licence at all, neither a physical nor a mobile version. If the person can be identified, then all the needed information (the right to drive, insurance coverage, etc.) can be checked digitally. This data exchange must, of course, be done in a secure way. In Estonia, these data requests are made over the secure data exchange layer known as X-Road, which Gofore has had a role in developing since 2015.
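The flow of such a check can be sketched as follows. The register contents and function names here are invented for illustration; a real X-Road query runs through security servers with signing and logging:

```python
# Toy stand-ins for the national registers queried over the exchange layer.
DRIVING_REGISTER = {"38001085718": {"right_to_drive": True, "categories": ["B"]}}
INSURANCE_REGISTER = {"38001085718": {"insured": True}}

def roadside_check(personal_code: str) -> dict:
    """Once the person is identified, combine register lookups into one answer."""
    driving = DRIVING_REGISTER.get(personal_code, {"right_to_drive": False})
    insurance = INSURANCE_REGISTER.get(personal_code, {"insured": False})
    return {
        "may_drive": bool(driving["right_to_drive"] and insurance["insured"]),
        "details": {**driving, **insurance},
    }

if __name__ == "__main__":
    print(roadside_check("38001085718"))
```

The point of the pattern is that no document is carried: identification comes first, and every fact about the person is fetched fresh from the authoritative source.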
Challenges with interoperability and rapid technological change
The idea behind eID standardisation and interoperability is that government services become more user-friendly, flexible, convenient, and resilient to the many risks caused by divergent designs and implementations. Yet although electronic identification is regulated in the EU by eIDAS (electronic Identification, Authentication and trust Services), its implementation in different countries is progressing at significantly different speeds and scopes. Presenters repeatedly stressed the urgent need for eID standardisation, whether de jure or de facto, and for cross-border interoperability of the various eID solutions already in existence.
At the same time, the main challenges concerning eID are, in our opinion, twofold:
- Technological advancements for eID (including secure devices and identification capabilities such as face recognition) are evolving fast, and it is unclear how regulation can keep up in this race.
- Did you know that, according to the World Bank Group’s 2018 #ID4D Global Dataset, an estimated one billion people around the globe do not have an identity that they can prove? For them, provable identity is not just missing in the digital society; it is missing altogether. Because they face difficulties in proving who they are, they lack access to services that require a digital identity. Might we consider being listed in a population register a human right?
Such fast-paced digital evolution as presented and debated at eID Forum is affecting every organisation in some way or another. We can help you rise to the challenges of fast digital change that you are facing in your business domain, and support you with both organisational and technological change. These services are available in Europe and worldwide through your nearest location: Estonia, Finland, the UK, Germany or Spain.
The latest X-Road Community Event was a huge success. With 150+ participants from 22 countries, it is evident that enabling digital societies is a hot topic, and a subject of tangible action, among nations worldwide.
The event was organised by the Nordic Institute for Interoperability Solutions (NIIS) who are developing and managing an open source data exchange solution called X-Road. X-Road is the basis for data exchange in the public administration in Estonia and Finland, both of whom are founding members of the organisation. Lately, Iceland and the Faroe Islands have also joined as partners – and various countries and regions in Europe, Africa, the Americas and Asia have run trials and adapted X-Road for their use. See the X-Road world map for details.
Currently, Gofore is the sole developer of the X-Road core for NIIS through a public procurement.
X-Road version 6 is deployed in Finland and Estonia, and Iceland will follow suit shortly. The Faroe Islands and some other countries are preparing to migrate their platform from version 5 to 6.
At the event, plans for the next version of the software, X-Road 7 Unicorn, were introduced and presented in various workshops by experts from Gofore, the Finnish Population Register Centre (VRK) and the Estonian State Information System Authority (RIA). NIIS CTO Petteri Kivimäki stated: “X-Road is not developed for us [NIIS] but for you [nations and organisations]”, so close collaboration between core development and the existing and planned local installations is clearly highly valued. The MIT-licensed open source software allows maximum utilisation, and all users are welcome to contribute and create pull requests for required additional features.
Planning to utilise X-Road?
If your country or organisation has various data sources and siloed services, taking X-Road into use provides a fluent, fully secured and easily manageable way to exchange data between them. Such fluent data exchange opens endless possibilities for machine-to-machine applications and easy cross-border data exchange between countries. Of course, the ultimate target is smooth, human-centric services for citizens, which often require an additional trusted digital identity management system to be built alongside the information systems connected by X-Road.
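From a consumer's point of view, an X-Road service call goes through the organisation's own security server, addressed by the service's identifier path. The sketch below composes a request in the shape of the X-Road Message Protocol for REST; the server address and the member and service identifiers are made-up examples:

```python
def xroad_rest_url(security_server: str, instance: str, member_class: str,
                   member_code: str, subsystem: str, service: str) -> str:
    """Compose the service URL a consumer sends to its own security server."""
    return f"{security_server}/r1/{instance}/{member_class}/{member_code}/{subsystem}/{service}"

# Hypothetical provider: a vehicles subsystem of an Estonian government member.
url = xroad_rest_url("https://ss.example.org", "EE", "GOV", "70000001",
                     "vehicles", "right-to-drive")

# The consumer identifies itself with the X-Road-Client header
# (instance/memberClass/memberCode/subsystemCode):
headers = {"X-Road-Client": "EE/COM/12345678/my-subsystem"}

if __name__ == "__main__":
    print(url)
```

The security servers on both sides then handle authentication, encryption, signing and logging, which is what makes the exchange "fully secured" without each service implementing that itself.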
Gofore has experience and deep expertise in all layers of X-Road and in digital identity design, development and deployment, and we are ready to support their utilisation at an international scale.
If you want to hear more, please contact the author or download the leaflet below – it will provide more detail on why, what and how X-Road would help to achieve a digital society in your context.
Interested in reliable, secure and easy-to-use integrations for digital services? X-Road provides this and more. Download our X-Road leaflet to learn more about how it could be utilised in your business: X-Road leaflet