Opintopolku, run by the Finnish National Agency for Education (Opetushallitus, OPH), is a national online service through which prospective students can search for and apply to study programmes. The service also brings together the services of education and training providers into one seamless whole.
OPH's service is used by more than 300,000 learners and more than 10,000 administrative officials every year. The whole consists of several dozen background systems and needs a modern infrastructure to support it. The service, originally built in a traditional data centre, was modernised on top of cloud services. Gofore had overall technical responsibility in the multi-vendor project.
”The Finnish National Agency for Education had been planning a more cost-effective and more modern capacity solution for the service for quite some time. We went through several cloud options and compared them with each other. Amazon Web Services (AWS) turned out to be the best solution for our needs. AWS is a globally well-known service, and there are enough experts in Finland, which was important to us,” says Erja Nokkanen, Chief Information Officer at OPH.
A major migration without a hitch
The Opintopolku service was successfully migrated to the AWS cloud in February 2018. The migration project was challenging, since the continuously used service consists of many interdependent background systems and offers dozens of external integrations, for example to Kela, the Social Insurance Institution of Finland.
”The cloud project started in August 2017 in cooperation between the Finnish National Agency for Education, Gofore and the developers of OPH's service portfolio. The six-month project was carried out in an agile and flexible way, so that the actual migration went through smoothly and without problems,” say Erja Nokkanen and senior specialist Mika Rauhala from OPH.
Among large public administration systems, Opintopolku's AWS migration makes it a pathfinder for the technical solutions of the future. In spring 2018, the cloud-based service was immediately put to a proper test when tens of thousands of applicants submitted their applications in the joint application processes for upper secondary and higher education.
What does the new infrastructure enable?
- Flexibility for development
Thanks to this flexibility, multiple development environments can be created easily. New applications and technologies can be adopted with little effort using the platform's ready-made components, or new ones can be built from scratch.
Applications can scale automatically with load: up when more performance is needed, down when costs need to be saved. These changes take minutes instead of days or weeks.
The cost of maintaining the environment is lower than with traditional infrastructure. The cloud platform takes care of many tasks that are done by hand in a traditional data-centre model. Environments are also billed strictly according to use: if a development environment is not needed, it can be shut down and started again when required.
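As a rough illustration of the kind of rule an auto-scaling setup evaluates, the sketch below decides a desired instance count from an average CPU-load metric. The thresholds and capacity bounds are hypothetical, not Opintopolku's actual configuration:

```python
# Hypothetical auto-scaling decision rule: scale out on high CPU load,
# scale in on low load, and stay within fixed capacity bounds.
MIN_INSTANCES = 2
MAX_INSTANCES = 8

def desired_capacity(current, avg_cpu_percent):
    """Return the desired instance count for the observed average CPU load."""
    if avg_cpu_percent > 70:      # scale out when the fleet is busy
        desired = current + 1
    elif avg_cpu_percent < 30:    # scale in to save costs when idle
        desired = current - 1
    else:
        desired = current         # load is in the comfortable range
    return max(MIN_INSTANCES, min(MAX_INSTANCES, desired))
```

In practice a deployment delegates this to the platform's own auto-scaling policies rather than hand-rolled code; the point is only that capacity follows load in minutes, not weeks.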
- Repeatable infrastructure
The cloud environment makes it possible to manage the infrastructure as code. The whole setup is then documented, available to all parties and easy to reproduce. The change history is also easier to inspect.
The system is no longer a black box; the infrastructure is within everyone's reach. Developers are responsible for their output across the whole software life cycle and can make rapid changes to the system when needed. Time and money are saved when the infrastructure is close to the developers and product owners.
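As an illustration of infrastructure as code, a cloud environment can be described in a declarative template that is versioned alongside the application. The fragment below is a minimal, hypothetical AWS CloudFormation sketch (resource names and properties invented for illustration, not part of the actual Opintopolku setup):

```yaml
# Hypothetical CloudFormation fragment: the environment is plain text,
# reviewable, version-controlled and reproducible on demand.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example of a development environment defined as code
Parameters:
  BaseImageId:
    Type: AWS::EC2::Image::Id
    Description: AMI to launch the development server from
Resources:
  DevAppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: !Ref BaseImageId
```

Because the template itself is the documentation, tearing an environment down and recreating it later is a matter of re-applying the same file.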
Gonference 2018 logo
On Ascension Day earlier this year we held the first internal conference at Gofore in Tampere, Finland. Ascension Day has traditionally been a public holiday in Finland, but this year it was a ’normal working day’ at Gofore. We decided that this former public holiday would be perfect for an event where colleagues could share their expertise in a safe and relaxed environment. The event was called ’Gonference’, and we had two teams: the organising team and the technical team. The organising team’s main responsibility was to plan what kind of conference to hold; the technical team’s was to plan what audio and video equipment would be used at the event.
In January, Jarno Virtanen and some others were gathering people together to organise our first internal conference at Gofore. We wanted to gather our colleagues together in the same place to share experiences and ideas because sharing knowledge over project boundaries is very important in a company like Gofore.
The recruitment post in Slack
During spring we had a few organising team meet-ups and we called on our colleagues for speakers for the event – finally, we made the decision that the Gonference would be 4 hours long and contain three different tracks: Dev, DevOps and Design&Leadership. The technical team brainstormed setups for each track and chose Bright Finland Oy as the supplier for some of the audio and video equipment. We also decided that all technical equipment would be installed one day before the Gonference so we would have time to set everything up and sort out any problems.
So decisions about the nature of the event, the venue and the equipment required were made. Before the Gonference, the track hosts matched speakers with their presentations and the other organisers produced posters and carried out other general tasks, while the technical team recruited more people as helping hands. The technical team also held learning sessions with volunteers who wanted to learn how live streaming works.
Second detailed training session about streaming audio and video
The Gonference day
On the Gonference day the atmosphere was very positive, and everyone was aiming for the same goal – a great learning event. In the morning the organisers checked that the tracks had all their equipment and the seats were in place. The event started at 12:50 with an overview of the day’s ambitious schedule. We also arranged two longer breaks during the day with refreshments for all attendees.
In the Dev-track the talks covered Jakarta EE, microservice architecture, Rust, etc. The DevOps-track topics included serverless infrastructure, the Netflix OSS microservices stack, secure design, designing high-performance applications, etc. And the Design&Leadership-track contained topics like Design Sprints, technology with culture, designing against the norms, etc. Overall we had 18 presentations in our 4-hour conference.
On the next day, we started to collect feedback from the audience; we got positive comments and many ideas about how we could do things differently next year. The organisers also held a retrospective where we collected feedback from the organisers’ point of view.
The first speech in Dev-track at Gonference
The Gonference was a great learning experience for all of us. From the organisers’ point of view, it was a huge success as the first big event at Gofore. For some speakers it was their first time on stage, and they performed superbly! For some of the organisers it was also their first time organising this kind of event, but everyone was full of enthusiasm. The organising and technical teams were awesome, and the event was carried out with quality and professionalism. All of the teams showed huge enthusiasm and willpower to make this a successful event. We especially want to thank Bright Finland Oy for renting us the audio and video equipment and for their great support!
We wanted to keep this conference a private event, as we were all learning new things. The most important reason was that we wanted to offer a safe learning environment where everyone could participate without judgement. However, we are now looking at possibly sharing some of the Gonference talks on our YouTube channel, so check it out.
This was our first internal conference at Gofore and we managed to raise the bar high for the next event. At Gofore anyone can organise an internal or a public event if they want to and following the success of Gonference we anticipate many more events in the future.
Kalmar is a global company providing cargo handling solutions and services to ports, terminals, distribution centres and heavy industry. Cargo handling, and the container handling process in particular, is becoming more and more automated. Looking at container terminal yards such as Hamburg or Rotterdam, it is clear that the automation of certain processes has already started.
The Gofore team, consisting of Jonna Iljin, Joel Bergström and Christopher Klose, participated in the Kalmar CoCreate workshop at the Terminal Operations Conference (TOC) Europe in June 2018. Within just two days at the conference, a new service idea was created. In this blog post, I will describe what happened during the trip, what we did at the conference itself and what steps we took to reach our goal. This should give you a brief outlook on how you can enrich the creation of new services and how to look beyond the ”obvious”.
So-called Automated Guided Vehicles (AGVs) are just one example of automated machinery that already receives tasks via navigation computers. AGVs coordinate routes to ensure that vehicles don’t collide with each other on the yard. These AGVs drive within a predetermined zone which is fenced off and which humans are restricted from entering. As long as no human enters the designated zone, the container handling process continues smoothly. However, as soon as a human enters the fenced area, all machinery has to stop to ensure safety. This raises the safety level for maintenance workers who have to work on a vehicle that has broken down, but nevertheless accidents do happen, and then the whole process has to stop.
The Gofore team was given the task of finding solutions to the challenge ”how to create secure working environments for people in the fully automated terminal yard”. Kalmar wanted to generate ideas and find solutions.
We were able to discuss and learn about the differences between manually operated and automated container terminals from container terminal managers, from our Kalmar mentors, and from other stakeholders in the container supply chain. We also learnt about deep-sea terminals and how they differ from short-sea terminals in terms of their processes, security and safety measures.
Arrival in Rotterdam and Team Dinner at Restaurant Thoms
This was my first time in the Netherlands, and it was clear right from the beginning that Rotterdam is different to other cities I have visited. The city centre of Rotterdam has unusual architecture that draws your attention, and you start wondering whether people actually live in these buildings. The cyclists were also unexpected to me. I mean, the Netherlands and the Dutch are known for cycling a lot, but having separate cycle roads alongside ’normal roads’ is something you don’t see too often in other countries. Even more fascinating to me was that no cyclist was wearing a helmet, which made me think, ”is it safer to wear a helmet or to build separate cycle roads?”
In the evening we had our first official meeting at Thoms, a restaurant close to the city centre of Rotterdam. There we first met the organisers from Kalmar, the other two teams participating in the Co-Creation, and our mentors from Kalmar who supported us during these days. Our mentors shared many insights and helped us find people who could give us more information from different perspectives. As we mingled with all these people and got an idea of what Co-Creation means to Kalmar, it was clear that we were all excited about the end results, which would be presented to the TOC audience on Wednesday.
After getting to know each other over an aperitif, we continued with a long and copious dinner. Salad, carpaccio and mussels came as starters, and it already felt as if this was the main course, but we continued with wonderful-tasting steak and gilthead. It tasted amazing. While eating we shared many stories and discussed stereotypes from different countries and whether they are true or not. It already felt as if we had been acquainted for more than just an evening, and it got pretty late. But all evenings have to end at some point, so we had a Dutch dessert called ’warme brood pudding’ and got some rest before the real work started.
Beginning the journey
At 9 o’clock we had the first meeting of the day at the Co-Creation booth to go through the upcoming events and the time we would have for ideating and preparing our pitch.
We even got amazing hoodies for the event! After that, we were able to prepare our material and get ready to dig into an area that was new to us.
But how did we start in this unknown area?
We knew we didn´t have much time so it was important to understand the whole container handling process quickly. Therefore we interviewed our mentors, terminal managers and other visitors to TOC Europe in the morning. We soaked up all the information and wrote as much as possible down on post-it notes.
I need to say this – it was a mess! So many insights, so many new terms and so many risks. After several hours of questioning and trying to understand what was happening, we needed a break. By now it was time for lunch, and breaks are always good if you have something to eat.
After lunch, it was time to structure all our information. We summarised our insights and with these insights written down, we were able to find bottlenecks and map these to the container handling process.
In the afternoon we started ideating: what kind of approaches would be reasonable? What might be the right way? Which constraints would we face?
To avoid drifting too far away with our ideas and visions, we constantly explained them to our mentors and to terminal managers. This helped us narrow down the options and look into the near future rather than 50 years from now.
After rushing through the day, finishing time came quicker than expected, and with it the feeling that a break to settle all our thoughts was necessary.
We spent a relaxing evening in Rotterdam collecting our thoughts and running through our ideas – we knew that the next day would be tough!
Continuing where we had left off the day before, we made final tweaks to our presentation in the morning and completed final interviews and discussions to validate our idea. We prepared our pitch and aligned our speeches into one cohesive proposal – and I have to say, Deadpool would have been proud of the growing hand in our speech.
Even though we only had a couple of minutes per team for our pitch, it was a nice experience and seeing so many people interested in our outcome gave us even more certainty that this was a great success. Many visitors and Kalmar members attended the final pitch and appeared to be deeply interested in our ideas.
To conclude the Co-Creation session, we were invited by Kalmar to their TOC after-party, which was a blast. A great location, fantastic food and music, and many more people with whom we could discuss our ideas and relax.
There is only one thing left to say, a big thank you to all the Kalmar people who made this possible. It was an amazing experience, even though exhausting. Seeing so many people interested in thinking outside the box made it clear that there is still a lot of work to do. We are excited at the prospect of taking our ideas forward and helping shape the future with Kalmar – stay tuned for more blog posts!
Additionally, we have to thank the great photography team, who took not only beautiful pictures but also made a great video. Have a look here: TOC video
Why Do Threat Analysis
Well, why not? Due to a lack of familiarity with or experience of doing threat analysis (aka threat modelling), there can be a certain reticence or fear of failing. But such reticence or fear is unfounded. Firstly, you won’t be alone doing threat analysis, as you’ll have the application domain experts with you; secondly, a rudimentary threat analysis is more useful than none at all; and thirdly, analysis competence will grow by following up on the rudimentary threat analysis in successive sessions.
The objective is secure-by-design (aka built-in security) and one of the ways to achieve that is to identify and mitigate design-related security vulnerabilities during the system or application design phase. The output from threat analysis will be a set of security requirements that must simply be fulfilled like all of the other requirements.
Note that security is one of the enablers of overall product and service quality, upon which customers and end users rate trust and reputation. Additionally, since the General Data Protection Regulation (GDPR) came into force on 25 May 2018, and since secure-by-design is an enabler of privacy-by-design, it is prudent to also query and scrutinise privacy risks and mitigations during threat analysis.
When To Do Threat Analysis
There is a timing element to threat analysis. Ideally, threat analysis is performed as soon as the architecture has been established and is sufficiently mature. However, no matter how late in the development process threat analysis is performed, it is critical to understand the weaknesses in the design’s defences.
The fiscal and temporal cost of addressing security issues will generally increase if the design phase misses security weaknesses, so it is much more useful (and more cost-effective) to begin the process of identifying potential attacks and associated mitigations as early as possible.
A threat model should begin when the major structures and major components or functions of the architecture are mature. Note that threat analysis is a useful exercise regardless of how close the system is to deployment or how long the system has been in use; but as said, the earlier, the better. Successive development cycles, as is the norm in Agile development, should include focused threat analyses when design changes are made, in order to gradually mitigate the security risks that the system currently carries. Also note that there is a distinction between end of development and end of support: even when active support has ceased, a proper threat model will bring clarity about the possible flaws in the system.
Who Does Threat Analysis
Optimally, the product owner, lead architect, lead developer, lead tester, and lead user experience designer should together perform threat analysis, with advice and facilitation from a security advisor or consultant. For GDPR aspects there may also be involvement of the Data Protection Officer, who has formal responsibility for data protection compliance within an organisation. As well as bringing insights from each of these competence areas, this cross-competence representation – in my opinion – makes broader security awareness more likely to spread throughout the development team(s).
The Link to Requirements Engineering
Threat analysis should be planned and applied in conjunction with requirements engineering; in this way, the security requirements can be derived as an integral part of the overall requirements derivation process. The security requirements derived from threat analysis should be linked to related business case epics, use cases and user stories so that they simply become ”requirements amongst others”, which helps to dispel some of the unfounded reticence and mystery around security requirements.
How to Do Threat Analysis
How do you want to do it? There’s no single perfect way to do threat analysis; open source and proprietary methods exist. But they all similarly strive to answer the following questions:
- What assets require protecting?
- Who and what are the threats and vulnerabilities?
- What are the controls to mitigate the threats and vulnerabilities?
- What are the implications of not securing assets?
- What is the value to the organisation?
A good threat analysis methodology is one that identifies potential security vulnerabilities and threats, and as an exercise is time-optimised, manageable and repeatable. Project artefacts used for threat analysis include (and are not limited to):
- Architectural diagrams (component and logical models)
- Data-flow diagrams (DFDs)
- User stories and use cases (and related misuse and abuse cases)
- Mind maps
- UI designs
Think like the attacker to understand the relevant threats to both security and privacy. Use defined security questions to ensure sufficient threat landscape coverage. Use the following categories to understand who might attack the application:
Accidental Discovery
- An ordinary user stumbles across a functional mistake in your application while using a web browser and gains access to privileged information or functionality.
Automated Malware
- Programs or scripts that search for known vulnerabilities and then report back to a central collection site.
The Curious Attacker
- A security researcher or ordinary user who notices something wrong with the application and decides to pursue it further.
Script Kiddies
- Rebellious types seeking to compromise or deface applications for notoriety or a political agenda.
The Motivated Attacker
- A disgruntled employee or contractor with inside knowledge, or a paid professional attacker.
Organised Crime
- Criminals seeking high-stake pay-outs by cracking high-value organisations for financial gain.
Apply STRIDE To Classify Identified Threats
STRIDE is a classification scheme for characterising known threats according to the kinds of exploit that are used (or motivation of the attacker). The STRIDE acronym is formed from the first letter of each of the following threat categories:
Spoofing Identity
- Identity spoofing is a risk for applications that have many users but provide a single execution context at the application and database level. Users should not be able to become another user or assume the attributes of another user.
Tampering with Data
- This is about how an application deals with untrusted data – treat all data as untrusted anyway! Examples of tampering are changing GET or POST variable values directly in the URL address bar, failing to check data integrity, and deficient client-side and server-side data validation. In the worst case, you can call it data sabotage. This relates to threats #1 and #7 in the Open Web Application Security Project (OWASP) Top 10 2017.
Repudiation
- Users may dispute transactions if there is insufficient logging and auditing of user and system events, or if user activities cannot be tracked through the application or from the audit trail. A properly designed application will incorporate non-repudiation controls such as tamper-resistant logs; it should not be possible to alter logged events. A note of caution here: in terms of the GDPR, only the data required to facilitate audits should be logged; do not log personal data if there is no explicit requirement to do so. This relates to threat #10 Insufficient Logging and Monitoring in the OWASP Top 10 2017.
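One common non-repudiation control is a hash-chained, append-only log, where each entry commits to the previous entry's hash so that any later alteration breaks the chain. A minimal sketch (simplified for illustration; a real implementation would also sign or externally anchor the hashes):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_event(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return log

def verify_chain(log):
    """Recompute every hash; any tampered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers the previous one, an attacker cannot silently edit or delete an old event without invalidating every entry that follows it.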
Information Disclosure
- There are basically two types: a data leak and a privacy breach. A data leak can result from a flaw or from a malicious attack. The most common flaw is simply not using Transport Layer Security (TLS) and not enforcing HTTP Strict Transport Security (HSTS). This flaw provides entry points for attackers to steal keys, execute man-in-the-middle attacks, or steal clear-text data from the user’s client (e.g. browser) and data in transit. Concerning privacy breaches, in the context of the GDPR due caution is required concerning the protection of personal data from accidental or unauthorised disclosure.
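The two server-side basics just mentioned – forcing TLS and sending an HSTS header – can be sketched as small helpers. The helper names below are hypothetical; web frameworks provide equivalents out of the box:

```python
# Hypothetical helpers: redirect plain-HTTP requests to HTTPS and attach
# an HSTS header so browsers keep using TLS for subsequent requests.
HSTS_VALUE = "max-age=31536000; includeSubDomains"  # one year

def https_redirect(url):
    """Return the HTTPS redirect target for an http:// URL, else None."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return None

def add_security_headers(headers):
    """Return a copy of the response headers with HSTS enforced."""
    secured = dict(headers)
    secured["Strict-Transport-Security"] = HSTS_VALUE
    return secured
```

The redirect closes the plain-text entry point, and the HSTS header tells returning browsers never to attempt plain HTTP again, shrinking the man-in-the-middle window.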
Denial of Service
- The Denial of Service (DoS) attack is focused on making a resource (site, application, server) unavailable for the purpose it was designed for. There are many ways to make a service unavailable for legitimate users, among others by manipulating network packets or by exploiting programming, logical, or resource-handling vulnerabilities.
Elevation of Privilege
- Access control enforces policy such that users cannot act outside of their intended permissions. Flaws typically lead to unauthorised information disclosure, modification or destruction of all data, or performing actions outside of the bounds authorised by the administrator for the user. It is essential that the user cannot manipulate the human-computer interface (HCI) to elevate his/her role to higher privilege roles. This relates to threat #5 Broken Access Control in the OWASP Top 10 2017.
For further types of flaw-exploited attacks, you can refer to the OWASP Attack Categories.
Apply DREAD to Quantify Threats
DREAD is a classification scheme for quantifying, comparing and prioritizing the amount of risk presented by each evaluated threat. The DREAD acronym is formed from the first letter of each category below.
DREAD modelling influences the thinking behind setting the risk rating and is also used directly to sort the risks. The DREAD algorithm, shown below, is used to compute a risk value, which is an average of all five categories.
DREAD_Risk = (DAMAGE + REPRODUCIBILITY + EXPLOITABILITY + AFFECTED USERS + DISCOVERABILITY) / 5
The calculation always produces a number between 0 and 10; the higher the number, the more serious the risk. Here is one way to quantify the DREAD categories:
- If a threat exploit occurs, how much damage will be caused?
- 0 = Nothing
- 5 = Individual user data is compromised or affected
- 10 = Complete system or data destruction
- How easy is it to reproduce the threat exploit?
- 0 = Very hard or impossible, even for administrators of the application
- 5 = One or two steps required, may need to be an authorised user
- 10 = Just a web browser and the address bar is sufficient, without authentication
- What is needed to exploit this threat?
- 0 = Advanced programming and networking knowledge, with custom or advanced attack tools
- 5 = Malware exists on the Internet, or an exploit is easily performed, using available attack tools
- 10 = Just a web browser
- How many users will be affected?
- 0 = None
- 5 = Some users, but not all
- 10 = All users
- How easy is it to discover this threat?
- 0 = Very hard to impossible; requires source code or administrative access.
- 5 = Can figure it out by guessing or by monitoring network traces.
- 9 = Details of faults like this are already in the public domain and can be easily discovered using a search engine.
- 10 = The information is visible in the web browser address bar or in a form.
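The scoring above can be turned into a small calculator; the sketch below simply implements the averaging formula given earlier:

```python
def dread_risk(damage, reproducibility, exploitability,
               affected_users, discoverability):
    """Average the five DREAD scores into a single 0-10 risk value."""
    scores = (damage, reproducibility, exploitability,
              affected_users, discoverability)
    if any(not 0 <= s <= 10 for s in scores):
        raise ValueError("each DREAD score must be between 0 and 10")
    return sum(scores) / 5
```

For example, a threat scored as damage 10, reproducibility 5, exploitability 5, affecting all users (10) and easily discoverable (10) averages to 8.0 – a serious risk to prioritise.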
Implement Threat Mitigations
Fortunately, there already exists a body of industry-scrutinised application security requirements known as the Open Web Application Security Project Application Security Verification Standard (OWASP ASVS) that can be readily applied as security requirements for mitigating identified security weaknesses and threats.
Verify Threat Mitigations
The OWASP Testing Guide will help developers and testers to verify the implemented security mitigations to the accepted risk level. By ”accepted risk level” is meant that there may be residual risks that would be too expensive to mitigate relative to the value of the business case; project management or owners must be prepared to sign off on residual risk acceptance.
Penetration testing should support verification, and automated security testing should also be built into the Continuous Integration/Continuous Delivery (CI/CD) pipeline using e.g. Checkmarx and Open Source Analysis security verification tools.
Also, verify with the Data Protection Officer that any identified privacy-associated risks have been sufficiently mitigated to the accepted risk level.
Finally, be curious – I think this is the most important advice; without curiosity, threat analysis might not happen. As said at the beginning, don’t postpone threat analysis; be curious, jump right in, and learn it by doing.
OWASP Application Security Verification Standard (ASVS) is an industry-respected open-source framework of security requirements that MUST be incorporated when designing, developing, testing and deploying modern web applications for digitalised environments. It provides the security verification requirements to address your defined security questions.
OWASP ASVS Level 2 ensures that business-level security controls are in place, are effective, and are used by business-critical applications to defend against vulnerabilities and against attacks. Security threats to Level 2 applications will typically be by skilled and motivated attackers who focus on specific targets using tools and techniques that are highly practised and effective at discovering and exploiting weaknesses within applications.
The objective is that applications are secure-by-design and secure-by-default.
Secure Design is King
OWASP ASVS requirements should be applied as a customisable security blueprint for identifying the relevant security requirements according to identified potential threats to business entry points, gateways, critical assets and credentials. The OWASP Top 10 provides a starting-point high-level summary of the very minimum security coverage required.
Bake The Security In
For each application, the required OWASP ASVS requirements should be pinpointed during feature design and epic and story planning, in order to ensure that the required security controls are part of development from the outset.
How to start
The development team scrutinises the intended application or service design using the defined security questions as a guideline. This will identify the entry points, boundaries, components and interconnections that are security-relevant.
The application team can then utilise the OWASP Application Security Verification Standard (ASVS) requirements to produce security epics and stories that can be managed as Jira tickets.
Do not expect perfection from the beginning – getting security right is difficult and it is a learning-by-doing experience – but doing secure design and development in a structured and traceable way using industry-respected methods and materials is already a good start.
Others: ”Oh, sorry, we didn’t realise – we will change rooms as soon as possible”
What follows is a period of chaos in the meeting room as people gather their stuff and quickly clean up. The situation has a few possible endings: your own meeting starts late, the room is a mess, the air in the room is stale – or worse.
Then the previous meeting moves to another room without checking the calendar or reserving the new room.
This is a simple example, but one that happens all too often. There are plenty of other examples, and I know – or at least hope – that there are meetings where the organisers arrive a few minutes early and check that the room is nice and tidy. I’ve faced that kind of situation a few times in my career, and I’ve also been on ’the other side’ of the story, in the wrong place at the wrong time. I believe I’m not alone and that tons of people have been on both sides of this story.
At Gofore, we have open-plan offices and dozens of meeting rooms. We want to use these rooms as efficiently as possible and in a way that doesn’t disturb others in the office – whether for a phone call, a scheduled meeting or an ad-hoc one. But the problem is that finding and reserving a meeting room takes some time, and sometimes you don’t want to spend time doing that.
We started having several ”corridor discussions” about the main problems with reserving rooms. Some said that Outlook calendars are way too complex for finding a suitable room, and that reserving a room for just 15 minutes takes too long – many other problems were raised too. A few people (including me) started brainstorming about what we were missing and what we could do better. We brainstormed for about 30 minutes about what kind of system we would need, and we also checked what off-the-shelf (OTS) solutions existed. After brainstorming, we made a collective decision to build our own meeting room reservation system, partly because none of the OTS solutions met all our requirements, which were:
- An information screen outside the meeting room – This was the number one requirement, because many of us want to reserve a room for a few minutes without making a reservation from a laptop, for example when an urgent phone call comes in.
- Finding a meeting room that meets the organiser’s requirements – We have some variances in our meeting rooms eg. what equipment is available. Also the room capacity is very important for those reserving meeting rooms.
- Low cost – Most commercial solutions offer annual subscriptions and this can be very costly when we now have around 40 meeting rooms in Finland alone.
- Office 365 (O365) integration – Some free solutions offer only Google Calendar integration and most commercial solutions contained multiple integrations like O365.
We agreed that these needs must be fulfilled, and we also realised that our system must be centralised as a web service and provide a front-end, giving us easier remote management. I took the lead in kick-starting this project: it was a good opportunity to study new technologies and to strengthen my ’old knowledge’ – and, most importantly, I was very interested in taking part in this internal project.
Steps to a working meeting room system
I’ve been a Java developer for several years, so I chose Java 8 with Spring Boot as the backend technology, and I needed only a few development hours to get the first version of our meeting room reservation system up and running. As the frontend technology I chose React, because I wanted to learn it and because React is one of the technologies we commonly use in Gofore’s customer projects. I also wanted to learn how to develop a responsive web page that works on desktop and mobile devices, so this was a really good chance to learn something new.
The very first version of the meeting room system, showing only two meeting rooms with one day’s events in a list
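At its core, such a backend is little more than a reservation model with an overlap check. As a minimal sketch (the class and method names below are my illustration, not the actual production code):

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of one meeting room's reservation list (Java 8 style).
public class RoomCalendar {

    public static class Reservation {
        final LocalDateTime start;
        final LocalDateTime end;
        public Reservation(LocalDateTime start, LocalDateTime end) {
            this.start = start;
            this.end = end;
        }
    }

    private final List<Reservation> reservations = new ArrayList<>();

    // Two reservations overlap when each one starts before the other ends.
    private static boolean overlaps(Reservation a, Reservation b) {
        return a.start.isBefore(b.end) && b.start.isBefore(a.end);
    }

    // Try to add a reservation; reject it if it clashes with an existing one.
    public boolean reserve(LocalDateTime start, LocalDateTime end) {
        Reservation candidate = new Reservation(start, end);
        for (Reservation r : reservations) {
            if (overlaps(candidate, r)) {
                return false;
            }
        }
        reservations.add(candidate);
        return true;
    }
}
```

In the real system this logic sits behind Spring Boot REST endpoints, but the overlap rule is the part that must be right regardless of the framework.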
During last autumn and spring we took big steps towards our goals – the meeting room system is now deployed on Amazon Web Services Elastic Compute Cloud (AWS EC2), and it fetches rooms and calendar events via the Microsoft Graph API. Whenever there are updates, the desktop and mobile views refresh their content automatically. The Amazon S3 service stores the published versions of the meeting room system, so EC2 always gets the latest version when we upgrade.
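In the Graph API, a meeting room is a resource mailbox whose events can be read through the public `calendarView` endpoint. The sketch below only builds that request URL – the room address is a made-up example, and the polling and authentication around it are omitted:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hedged sketch: builds a Microsoft Graph v1.0 calendarView request URL
// that a backend could poll for a room's reservations within a time window.
// The room address passed in is hypothetical; the endpoint and its
// startDateTime/endDateTime query parameters follow the public Graph docs.
public class GraphCalendarUrl {

    private static final String GRAPH_BASE = "https://graph.microsoft.com/v1.0";

    public static String calendarViewUrl(String roomAddress,
                                         String startIso, String endIso) {
        return GRAPH_BASE + "/users/"
                + URLEncoder.encode(roomAddress, StandardCharsets.UTF_8)
                + "/calendarView?startDateTime=" + startIso
                + "&endDateTime=" + endIso;
    }
}
```

A real client would send this with an OAuth bearer token and parse the JSON event list into reservations.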
When the first meeting room tablets were installed, the reception was highly positive, and we got good feedback and ideas from colleagues – I’ll return to the ideas later. There are still some improvements to be made, for example in the visualisation: the tablet’s texts do not yet scale properly, and reservations can only be made in 15-, 30- or 60-minute slots – somebody may want to reserve 25 minutes. Nevertheless, everyone at Gofore can use the meeting room system on their laptop or mobile to take a quick look at all reservations on a timeline view, or open a meeting room’s tablet view to make a quick reservation. The Tampere and Helsinki offices now have dedicated meeting room tablets, Jyväskylä will get its own tablets sooner or later, and we will expand to our offices in Germany, Spain and the UK in due course.
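The fixed-slot limitation mentioned above comes down to the tablet’s quick-reserve buttons, which could be sketched like this (names and behaviour are my own illustration, not the actual tablet code):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Illustrative sketch of a quick-reserve button: a fixed-length slot
// (15, 30 or 60 minutes) starting from "now", truncated to the minute.
public class QuickReserve {
    public static LocalDateTime[] slot(LocalDateTime now, int minutes) {
        LocalDateTime start = now.truncatedTo(ChronoUnit.MINUTES);
        return new LocalDateTime[] { start, start.plusMinutes(minutes) };
    }
}
```

Supporting arbitrary durations like 25 minutes would mean replacing the fixed button set with a duration picker; the slot computation itself stays the same.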
So we now have a centralised meeting room reservation system that is integrated with O365 and can be used on a desktop or a mobile device, plus several dedicated meeting room tablets that show the current reservation information for each room. Let’s calculate the costs:
When starting a new internal project, it is important to keep in mind that it is never free, and there must be a good reason not to choose an OTS solution. For this project there was no OTS solution that fitted our needs, and, importantly, we now have a dedicated platform where we can learn new technologies and apply them. When people at Gofore want to learn something new and don’t have a live customer project, they can contribute to this or any other internal project, or study new technologies. Remember, continuous learning in the technology sector is always valuable work for the future.
The cost of commercial OTS products ranges from about 15–20 e / month / room, and our predicted costs for the first year are a little above this. I didn’t take maintenance costs into account, because I believe commercial products have about the same maintenance costs as our solution. When we calculate the predicted costs over two years, the price drops to about 13 e / month / room. I would say that our need to keep costs as low as possible is fulfilled over the long term.
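For a rough sense of the commercial baseline, the subscription cost scales linearly with the room count, so the figures above work out as follows (only the quoted 15–20 e / month / room and the 40 rooms are from the text; our own exact costs are not broken down here):

```java
// Worked arithmetic for the commercial baseline quoted above.
public class RoomCosts {
    // Annual subscription cost: price per room per month, times rooms,
    // times 12 months.
    public static int annualCost(int eurPerRoomPerMonth, int rooms) {
        return eurPerRoomPerMonth * rooms * 12;
    }
}
```

At 40 rooms that is 7,200–9,600 e per year for a commercial product – the range our own first-year cost lands a little above, before dropping below it over two years.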
A meeting room tablet in Tampere
There is only one requirement we haven’t fulfilled yet: room information and search functionality with parameters are still missing. Nevertheless, I believe the system will help us manage our meeting rooms for a good while. Colleagues have already given many positive comments and new ideas about what features the meeting room system should have. Some of the ideas:
- Usage statistics – Statistics on which meeting rooms are booked the most and what the usage percentage of each room is.
- Office map – Interactive maps of our premises, with labelled meeting rooms and other important places in the office.
- Measuring and storing air quality – We need air quality statistics to check that we continuously have fresh air in our meetings, and we could use them to calculate the optimal human capacity of a meeting room. We could also use that information when building completely new meeting rooms.
Every Goforean can contribute to this project, whether by brainstorming, coding or designing new features.
First things first – Gofore is all about caring, daring, community and, especially, taking responsibility. We care about each other and want to help others with their problems. We are daring: we take on problems and solve them in the most effective way possible. As a community we may end up innovating new things together that help our people or customers, without forgetting the power of collaborative decision-making. But there is always a catch – it is our responsibility to design and build solutions, so we always have to do our best.
In this case we have turned an annoying problem into an innovative solution – although it is not the only one of its kind, for now this solution fits our needs. The Gofore meeting room reservation system can still gain many different features, and it will help us hold even better and more efficient meetings in our offices. I’m especially excited about measuring meeting room air quality, because bad air affects our health and can make people lose their focus in a meeting.
If you have any questions about this system, post a comment below. And if you are struggling with the same kind of situation, we can always help you.
Gofore’s recruitment campaign, in which everyone who signs an employment contract receives 1,500 euros’ worth of company shares, continues until the end of June. Through this reward, new employees become owners of the company.
”We wanted to offer new employees a striking first taste and a concrete experience of stepping into the same boat. About 70% of Gofore’s employees are shareholders in the company, and we wanted to give new employees the same opportunity”, says CEO Timur Kärki.
The recruitment campaign highlighted Gofore’s culture, its values and the company’s role as a pioneer in its field. Boosted by the campaign’s pull, dozens of new-generation IT professionals have signed employment contracts with Gofore this year.
Working for a company of their own is a new feeling for many fresh Goforeans, and many are first-time investors. New employees’ thoughts and impressions of both the campaign and starting work have been very positive. Many have said that it is great that Gofore does things differently – and that the company culture is what draws them in.
“At Gofore we walk the talk. At the core of what we do is the capability for change, for the benefit of our customers. The recruitment campaign certainly strengthened our employer image – it supports a committed work culture”, says management advisor Jere Talonen, who started at Gofore in May.
Software developer Jere Peltola, who also started in May, shares similar feelings: ”It’s great to come on board both as an employee and as an owner in one go. Shares are a good way to commit people to Gofore and its success”.
Exceptional employee-centricity is strongly present in board work too
In Gofore’s values, employees come first.
”At Gofore, developing and nurturing both the culture and the company are seen, in a fine way, as a shared endeavour. Of course this is also reflected in the board’s work”, Kärki comments.
Gofore has a tradition of nominating a board candidate from among its employees. Last year, technical project manager Anne-Mari Silvast joined the board. Before her, service architect Niko Sipilä was elected for the two previous terms. At the Annual General Meeting held in March, Silvast, who stood as a candidate, was elected to continue.
”As far as I know, Gofore’s model, in which employees elect a board candidate from among themselves, is quite unique in Finland. Gofore’s approach reflects trust in its personnel and shows, on the other hand, that differing opinions and backgrounds are not feared”, Silvast summarises.
With an employee-elected candidate confirmed by the Annual General Meeting, Gofore’s board work has gained new perspectives.
”I gladly bring the personnel’s perspective into board meetings, but I am personally responsible for the decisions. The decisions are thus my own – it is not that I carry the personnel’s wishes into the meetings as such”, Silvast says.
”Of course, in coffee-table conversations I can sense how the personnel feel about many issues, and I try to bring that perspective to the board as well”, Silvast continues.
Gofore’s exceptional, practice-renewing board work has received recognition, including the Kultainen nuija (Golden Gavel) award from Hallituspartnerit ry in 2016.
More information about Gofore’s recruitment campaign and the work moods of future owners: https://gofore.com/menestys-tehdaan-yhdessa/
Timur Kärki, CEO, Gofore Oyj
tel. 040 828 5886