The GraphQL Finland 2018 conference was held recently (18-19.10.2018) at Paasitorni and was the first event of its kind in Finland. The conference brought a day of workshops and a day of talks around GraphQL. It was organized by the same people as React Finland, which showed in the smooth organisation. The talks were interesting, the venue was appropriate, the food was delicious, the atmosphere was cosy and the afterparty was awesome. Gofore was one of the gold sponsors and organized the afterparty at Kamppi.

All of the talks were live streamed and they are available on YouTube. I was lucky to get a ticket to the event and be able to enjoy the talks live. Overall, most of the talks were easy to comprehend, although I had only a little experience with GraphQL through experiments and what I had learnt a couple of months earlier at the React Finland 2018 conference.

“GraphQL is an open source data query and manipulation language, and a runtime for fulfilling queries with existing data. It was developed internally by Facebook in 2012 before being publicly released in 2015. It provides a more efficient, powerful and flexible alternative to REST and ad-hoc web service architectures. It allows clients to define the structure of the data required, and exactly the same structure of the data is returned from the server, therefore preventing excessively large amounts of data from being returned.” – Wikipedia

You can also read the organizer’s summary of the event and check out the photos.
(The GraphQL team on stage)

(Life is hard, learning GraphQL easy)

Notes from the conference

The talks at GraphQL Finland were quite fast-paced and more like lightning talks compared to the React Finland event: it was quite tough to digest all the new information. Fortunately, the talks were recorded, so you can concentrate on interesting and relevant topics and get back to the others later. Also, the sponsor lounge by Gofore and Digia provided a nice relaxing space to gather your thoughts. I have to say, Digia’s Star Wars pinball machine was quite fun 🙂
The talks covered different aspects of GraphQL and surrounding topics in detail. Here are my notes from the talks which I found most interesting and watched live at the event.

(Goforeans in the sponsor lounge)


(Goforeans challenging attendees to foosball)

Adopting GraphQL in Large Codebases – Adam Miskiewicz

The event started with Adam Miskiewicz’s story from Airbnb and incrementally adopting GraphQL. It’s simple to start using GraphQL in your project but adding it incrementally and carefully in huge codebases powering large distributed systems is not quite as straightforward. The talk dived into how Airbnb is tackling this challenge, what they’ve learned so far, and how they plan to continue evolving their GraphQL infrastructure in the future. Towards GraphQL Native!

Going offline first with GraphQL — Kadi Kraman

Kadi Kraman from Formidable Labs talked about going offline first with GraphQL. She did a nice interactive demo with React Native and Apollo 2. Users expect your mobile app to work offline and the tooling in GraphQL makes it reasonably straightforward to get your React Native app working offline. Slides

“Do this as you go and offline comes almost as a side-effect”

Life is hard and so is learning GraphQL — Carolyn Stransky

Life is hard, without documentation. Carolyn Stransky presented her story of ups and downs when learning GraphQL and documentation’s role in it. The problem with GraphQL is that – because there’s no “vanilla” GraphQL – there’s no central hub for all of the information and tooling necessary to learn it. It’s under-utilised and scattered throughout our community. The talk touched on how to better enable GraphQL docs for learning and comprehension and the slides pointed to good resources.

Database-first GraphQL Development — Benjie Gillam

Benjie Gillam from PostGraphile taught how a database-centric approach to GraphQL API development can give your engineers more time to focus on the important parts of your application. Adhere to GraphQL best practices, embrace the power of PostgreSQL, and avoid common pitfalls. Interesting slides.

graphql-php — Christoffer Niska

Christoffer Niska gave some good tips for software development: Don’t over-abstract, test everything, use static type checking, follow best practices, don’t prematurely optimise.

The (Un)expected use of GraphQL — Helen Zhukova

The talk by Helen Zhukova showed the benefit of a single code base on the client and server side, partly live-coded using CodeSandbox, among other tools. The “any DB”, in this case, was MongoDB.

Mysterious closing keynote — Dan Schafer

The mysterious closing keynote was Dan Schafer talking about the history, present and future of GraphQL. “Strive for single sources of truth”. There are still lots of things to do in the ecosystem.


The last chance to practice your Finnish was at the Afterparty  🎉  at the Gofore office!

”Someone said your afterparty was the best conference party ever :)”


Foosball was popular also at the afterparty.



Marko Wallin

Marko Wallin works as a software engineer at Gofore and changes the world for the better through digitalisation. He has years of experience in software development, agile methods and programming, covering the user interface, back-end services and databases. In his free time, Marko shares his technical knowledge through his blogs and by developing, among other things, open source mobile applications. Besides software development, he enjoys mountain biking.


Of the roughly 17 billion euro health budget, some share is inevitably spent not on patient care but on solving the system’s own problems – we just don’t know how large a share. Failure demand is a key concept for understanding the total costs of the social and healthcare service structure.
The continuous rise in healthcare costs may not be caused by the ageing of the population after all; instead, costs are driven up by failure demand arising from the service structure.
From the perspective of allocative and technical efficiency, failure demand is a separate factor related to the overall arrangement of service production.
A talk at Gofore’s Digisote event on Tuesday 6 November at 13:15 examines the concept of failure demand: for example, where it comes from and how it can be reduced.
Welcome to discuss the topic with us during the Digisote event – you can also contact us before the event.
More information about the Digisote event


Hermanni Hyytiälä

LinkedIn profile


Know thy Platform

Platform conventions

If you have any hands-on experience developing a new software/hardware platform, you know how laborious a task it is. Developers, engineers, designers, usability experts and managers (amongst others) can spend countless hours on fine-tuning the subtle interplay of hardware components’ combined performance, UI interactions, animations, branding, usability and foremost – the platform.
Upon release, besides the huge marketing budget, documentation is one of the key aspects that make or break a new platform. See for example the iOS Human Interface Guidelines. Documentation quality has improved; this is a blessing of the 2010s and the Internet, as the competition between platforms is make-or-break. Do not think that you are a platform expert after studying it, though, as you might end up trying to come up with custom conventions or UI patterns that no user is familiar with. Trust the guidelines that the platform developers have provided for you. Do not reinvent the wheel.

Platform features

Each platform comes with unique features; unique does not always mean great. It’s your job as a designer to harness the crème de la crème of each platform to suit your specific case. Understanding and unveiling the potential of a set of features each platform provides – that is the truest sign of any great designer.
(Apple Watch)
Examples of platform features:

I’m not going to explain how to get the most out of each of these in detail now. Just understand them profoundly. Besides, platform features are easy to spot: they are usually used in the marketing materials and technical specifications of any product.


The fine-tuned staccato that makes the platform reverberate needs to be harnessed in order to design the most functional and engaging experiences that take advantage of the platform’s full capability.
Most platforms come with pre-installed applications – study them in detail and try to understand why certain decisions were made. If you are new to the platform, first try to design them in wireframes by guessing what the application does, only based on the icon of the app. Discussing with other bright minds can be helpful here. After making your own rough paper prototypes of the same apps – compare them with the apps provided.
Helpful resources for understanding the platform usually exist: most of the time you can find interviews, videos and blog posts online about those exact design decisions. That is just a part of how marketing works nowadays. The platform developers want to explain why they made specific decisions and what criteria were considered. Understand, though, that those 2-3 minute videos may be an overview of a 12-month project. Go further and dig into the specific people interviewed; they usually have great Behance profiles, blogs, Twitter feeds or GitHub profiles.
Benchmarking is like listening: you learn from other people’s know-how. Stick with top-rated applications for the platform, as their user value is confirmed by the users themselves. Also, big players are good at making great software (Airbnb, Spotify, Facebook, Google, EA, …).
If the platform is brand new, consider similar platforms and their conventions – make your best guess on which of these to use in your specific case. Iterate a lot on the design by making rapid prototypes and test with real end-users. Build – Measure – Learn (- Pivot or Persevere).

Interacting with the platform

After you have familiarized yourself with the platform and maybe drawn some rough wireframes of your app, you can start planning the interactions themselves. To avoid cognitive load, you should keep to platform conventions and utilise users’ pre-existing know-how. This means taking advantage of common human skills and reusing domain skills. The less you force the user to learn new things, the less annoyed they are.
It is not enough to know what looks good, but a well-documented platform also contains examples of dos and don’ts and explains when a specific UI pattern is relevant. See for example documentation of Material Design. It has plenty of case-by-case examples.
A well-documented platform will also provide alternatives to a specific UI component. Try learning the major components well and understand their most potential use cases – This makes it easier to pull out a component for a specific need.
Your customer might say: ”Display a list of PS4 games and open each of them”. Basically, this should translate in your ears into: ”I need a component that is able to display multiple elements in one view and allows the user to view a detailed view of each element”. 
If you did your homework, finding a component like this from the platform pattern library should be rather easy.
Again, take advantage of all possible interaction possibilities of a platform, understand the context and go ahead – make that great interaction.

To be continued – This was part 1 in a series of upcoming posts on the topic: ”Design Essentials: How to Prosper on Every Platform”. 


Esa Juhana Lahikainen


Get ready for impact
The use of cloud services and web technologies means that it is no longer important where your physical base is located. Technology is changing every aspect of our lives and the ability for people working within both government and private organisations to work remotely is changing the way we do business (for the better). This sociological impact means that designers, developers and business professionals who work remotely can have a direct impact on both the environment around them and the wider world.
Adopted globally
This decentralisation of knowledge and work has disrupted many industries, and price is often the first thing to change. A coder based in a low-cost region such as Asia can undercut their peers in California or Western Europe. Real-time working tools such as Jira boards, Trello and Slack mean that teams can work effectively and efficiently wherever team members are located.
Add culture to this globalised decentralisation and the mix becomes significantly richer. I have first-hand experience working with clients in some of the most multi-cultural cities in the world, such as Amsterdam, London and Dubai. Multi-cultural teams look at challenges through different lenses. It is natural to have inbuilt bias even if you don’t realise it yourself. Where you were brought up, where you went to college and your circle of friends all influence your outlook on life and how you approach a challenge. Bringing together diverse cultures encourages discussion and helps to produce a richer, all-inclusive solution. Digital products and services are used by humans, and so a rich, all-inclusive product or service is more likely to be successful.
Culture promotes creativity
It is often said that travel broadens the mind; however, people who travel but fail to engage with the local culture get less of a creative boost than those who immerse themselves in it.
Dubai Future Accelerators

Lee and Abs from Dubai Future Accelerators

There are many examples of companies who have failed in their attempt at doing business in a new territory because they haven’t recognised the cultural differences. If you want to succeed in developing a product or service internationally you need to consider many things including, language, religious beliefs and different ways of working in order to build a robust and mutually beneficial business strategy.
Understanding your audience
Body language, casual and business etiquette, transparency and respect are some of the first things that I personally consider when approaching new international markets. It is important to recognise the cultural differences and ways of working, but more importantly to respect these differences even if you do not yet understand them. Change impacts all of us, it’s just that some adapt to it easier than others. Whether one shares the same beliefs or not, if you are to achieve success in international business a mutual understanding is imperative. Relationship building blocks are a foundation for growth. It is my belief that strong international collaboration between diverse cultures can and will produce better products and services and allow people to achieve more.
Take every opportunity to learn, understand and appreciate as many different cultures as you can in your lifetime. You never know where the creativity can lead you. Disruptive innovation comes from all corners of the earth. Often a problem in the most challenging environment can create the most life-changing solution.

Dubai Future Accelerators (DFA)

In September 2018 Gofore were selected to participate in the DFA program. This program is designed to bring together companies from across the globe to co-create products and services, with the aim of helping Dubai Government entities face the challenges of making Dubai the City of the Future. Working with the DFA and various government authorities has highlighted the importance of appreciating and embracing cultural differences. I have been working with people from all walks of life, from university students to senior government officials, all of whom share a passion for improving people’s lives through digital services. This phase of the DFA program draws to a close in late November – check out my next post to learn more about some of the exciting solutions that will be developed as a result of the program.
You can read more about the Dubai Future Accelerator program here:  The Dubai Future Accelerator Program


Lee Davies

Lee is an extremely motivated and multi-skilled businessman, with experience of working in a variety of industries. He has experience of working with reputable brands throughout Europe to provide products, systems and services that implement change and build businesses. His focus is on business analysis and strategy, sales growth and management, consultancy, project management and strategic commercial initiatives.


Vue tips

What Is Vue CLI?

Vue CLI (version 3) is a system for rapid Vue.js development. It’s a smooth way to scaffold a Vue project structure and allows a zero-config quick start to coding and building. Vue CLI Service, which is the heart of every Vue CLI app, neatly abstracts away common front-end development tools such as Babel, webpack, Jest and ESLint, while still offering flexible ways to configure and extend them as your project grows.
Let’s go through a few tips that’ll help you get even more out of your Vue CLI App.

1. Code Splitting And Keeping Bundles Light

Large Vue apps usually use Vue Router with multiple routes. Individual routes might also use various node modules. With Vue Router and webpack’s support for dynamic imports, routes can be automatically split into separate JavaScript and CSS bundles, and it’s easy to do, for example in your router.js:

{
  name: 'profile',
  path: '/profile/:user',
  component: () => import('./views/Profile.vue')
}

Code-split routes are loaded only on demand, which can have a major benefit on the initial loading time of your app.
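Under the hood, dynamic `import()` is plain JavaScript: it returns a Promise that resolves to the module, which is what lets the bundler split the file into its own chunk and the app fetch it only when needed. A minimal Node sketch of the mechanism (using a built-in module to stand in for a Vue component):

```javascript
// Dynamic import returns a Promise; nothing is loaded
// until this function is actually called.
async function loadOnDemand() {
  // 'os' stands in for something like './views/Profile.vue'
  const mod = await import('os');
  return typeof mod.platform;
}

loadOnDemand().then(t => console.log(t)); // prints "function"
```

In the route definition above, Vue Router simply awaits this Promise the first time the route is visited.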
Vue CLI also comes with Webpack Bundle Analyzer. It offers a nice bird’s-eye view of the built app. You can visualize bundles, see their sizes and also the sizes of the modules, components or libraries they consist of. This will come in handy when Vue CLI warns you about bundle sizes getting out of hand, giving you some hints on where to trim down the fat.
Vue CLI Service provides an extra --report argument for the build command to generate the build report. Add this handy little snippet to the scripts section of your package.json:

"build:report": "vue-cli-service build --report"

After running npm run build:report, you’ll get a report.html generated in your dist folder, which you can then open in your browser.

2. Fine-Tuning the Prefetching

Not only does Vue CLI handle code splitting, it also automatically injects these bundles as resource hints into your HTML’s <head> with <link rel="prefetch" href="bundle.js">. This enables browsers to download the files while the browser is idling, making navigating to different routes snappier.
While this may be a good thing, in larger apps there might be many routes that aren’t meant for the average user. Prefetching these routes will consume unnecessary bandwidth. You can disable the prefetch plugin in vue.config.js:

module.exports = {
  chainWebpack: config => {
    // remove the prefetch plugin
    config.plugins.delete('prefetch')
  }
}
And manually choose the prefetchable bundles with webpack’s inline comments:

import(/* webpackPrefetch: true */ './views/Profile.vue')

3. Use Sass Variables Everywhere

Vue’s scoped styles, Sass and BEM are helpful tools for keeping your CSS nice and tidy. You probably would still like to use some global Sass variables and mixins inside your components, preferably without importing them separately every time.
Instead of writing something like this in every component:

<style lang="scss" scoped>
@import '@/styles/variables.scss';
/* ... */
</style>

You can add this in vue.config.js:

module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import 'src/styles/variables.scss';`
      }
    }
  }
}

4. Test Coverage with Jest

Vue CLI comes (optionally) with Jest all configured, and with Vue Test Utils, writing unit tests for your components is a breeze. The CLI Service supports all of Jest’s command line options as well. A nice thing about Jest is its built-in test coverage report generator.
To generate a report, again for convenience, you can add another script to your package.json:

"test:coverage": "vue-cli-service test:unit --coverage"

Now run it with npm run test:coverage. Not only does it show a report in your terminal, but an HTML report is also created in the coverage folder of your project. You might want to add this folder to your .gitignore.
Using collectCoverageFrom in your Jest’s config, you can make the coverage also include files that don’t have tests yet, helping you identify and increase the coverage where it’s needed:

module.exports = {
  collectCoverageFrom: ['src/**/*.{js,vue}']
}

5. Modern Build for Modern Browsers

Most of us probably still need to take care of users with older browsers. Luckily Vue CLI supports a browserslist config to specify the browsers you are targeting. That configuration is used together with Babel and Autoprefixer to automatically provide the needed JavaScript features and CSS vendor prefixes.
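For example, a browserslist configuration in package.json might look like this (the exact queries below are only an illustration; pick ones that match your audience):

```json
{
  "browserslist": [
    "> 1%",
    "last 2 versions",
    "not dead"
  ]
}
```

Babel and Autoprefixer both read this same configuration, so transpilation and CSS vendor prefixing stay in sync.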
With a single extra --modern argument, you can build two versions of your app: one for modern browsers with modern JavaScript and unprefixed code, and one for older browsers. The best part is, no extra deployment is needed. Behind the scenes, Vue CLI builds your app utilizing new attributes of the <script> tag. Modern browsers will download files defined with <script type="module"> and older browsers will fall back to the JavaScript defined with <script nomodule>.
The final addition to your package.json:

"build:modern": "vue-cli-service build --modern"

Providing modern browsers with modern code will most likely improve your app’s performance and size.


Tuomo Raitila

Tuomo is a software designer primarily specializing in the user-facing side of web development and design, crafting visually appealing, modern, simple and clean UIs and websites, while always keeping the focus on user experience.


At Gofore, employees are encouraged to continuously develop themselves. My Master of Science (Tech.) studies ended just under a year ago, so I lost a natural avenue for development. The final stages of my studies focused on cloud services and their use in software projects. Through my studies I have become familiar with, among others, Heroku and Amazon’s AWS services. In addition, I have written about the Afterwork-alert application, and in connection with my Master’s thesis I migrated it to run on AWS Lambda. How could I continue the development that had started so well? And what does self-development at Gofore actually mean, and what are my experiences of it?

1% – University Studies

Now I am a graduated Master of Science in Technology. An empty feeling takes over. What now? Am I completely ready for working life, grinding away day after day until retirement age?
Luckily my world is not like that. During my career I have come to understand that in working life you have to develop continuously. A university degree is only a foundation on which to build. Looking around me, I see that those who have understood this enjoy their work and continuously find meaning in what they do. They keep developing and stay on the crest of the wave. That is the group I want to belong to.

2% – Experience

In the best case, working life itself continuously teaches me something new. That is usually called experience. Experience also includes a lot of repetition and knowledge of how things usually work. Experience is a very important part of my development into an ever better software professional.
A year ago I joined a project that makes very extensive use of AWS services. I have worked with many different AWS services (EC2, S3, CloudFormation, SNS and SQS). In addition, Ansible and Docker have been good new experiences for me. I had of course done some small things with them before, but nothing at this scale.
This experience has been valuable to me. Even though I studied cloud services and various cloud platforms during my studies, what I learned there certainly did not take me to this level.
But these experiences are also a rather uncontrolled way of learning new things. Is that still enough in today’s working life? Should I try to control the direction of my learning better, and not just its breadth?

3% – Enthusiastic Initiative

Fortunately, so little time has passed since the end of my studies that I still find it natural to use my free time for studying. I read articles, explore new technologies and every now and then get excited about implementing, as hobby projects, application needs from my family and friends. This way I can study, in a controlled manner, the things I find worthwhile.
In one work project I have become familiar with the Spring Boot and Camel frameworks. At the same time, my hobby project has very similar needs to that work project: in practice, to read a file, analyse the data, send data elsewhere and finally write a file, slightly augmented, to a third place. In my hobby project, instead of Camel, I took a slightly different angle and used the Spring framework’s Integration library. This way I got to learn how to use a new library, but in a use case similar to what I had already done at work with another library.
Studying new things in a controlled way pays off. I now have a superficial understanding of the practical differences between Spring Integration and Camel. However, in this situation I only got to know the details of the technology that I happened to need in that particular application. How could I also learn about the details that I don’t know to look for myself?

4% – Directed Development

Fortunately, there are other means I can use to develop my skills. In my work project I have noticed what an enormously wide range of cloud services AWS offers. I also know that Gofore values certification, because it is a good way to demonstrate the baseline competence of its experts.
I bought two online courses from Udemy whose lessons should make it possible to pass the AWS Architect Associate and Developer Associate certifications. The course materials are made by A Cloud Guru. I have almost finished the video material of the Architect course, and I have to admit, A Cloud Guru makes very high-quality study material. I have learned many details about different AWS services that I certainly would not have come across on my own, let alone necessarily encountered in work projects for years. For example, Amazon Polly turns text into very natural-sounding speech. A very interesting service, and I already have many different use cases for it in mind for the future.
These studies deliberately aim at a certification, which will improve my position in the future, for example when I make choices about new projects I would like to join. With a study package planned by someone else, I can learn about things I have never even heard of. In other words, controlled development in a specific direction.

5% – Continuous Development

The situations above describe different ways to learn. All of them are important for development: university studies, self-directed exploration, studying curated learning packages and, of course, experience.

By combining these different areas I can continue to develop strongly in the future as well. Slightly exaggerating, one could say that even if everything I know now is obsolete in ten years, my skills will still be up to date, because I strive to develop in many directions.

6% – Gofore

How do the percentages in the subheadings relate to my reflections?
Gofore instructs employees to use 6% of their working time for independent competence development. That is two hours and 15 minutes per week. That time can be used for practically anything that you feel develops your skills. And of course each employee can independently decide what works best for them: a whole day every now and then, or a couple of hours every week.

What can that 6% be used for, in practice?

  • While I was still a student, I was able to use working time in connection with my Bachelor’s and Master’s theses, among other things. When I made software demos related both directly to work and to my studies, or read, for example, AWS documentation, I logged part of the time spent against that competence development working time.
  • I spend time at home reading articles and coding hobby projects.
  • Recently, as a larger undertaking, I have been watching the video material of an online course aiming at AWS certification on bus trips, both after work and at home on weekends.
  • Gofore organises internal tech talks regularly, and in addition I can participate in meetups and conferences organised by other parties.

Gofore trusts every employee to be able to define for themselves what counts as appropriate competence development working time. The most important thing is that every employee understands that self-development is extremely important both for Gofore as a company and for the individual.


Oskari Ruutiainen


I work as a software engineer at Gofore in several customer projects and in a wide range of roles, sometimes consulting on my own and sometimes as part of larger development teams. Alongside customer projects, I am involved in developing Gofore’s student cooperation, work culture and employee wellbeing.


Devil in the detail
It takes tremendous effort to design something that feels effortless and pure. But does your team have what it takes to get rid of the extra weight to get there?

It works – so why isn’t it selling?

Many solutions that we use on a daily basis are examples of good design. The ergonomics of your coffee pot, the non-slip coating of your toothbrush. The social media app that you browse on your phone, and the bot sending you a reminder about that thing. Our every day is surrounded by good design, but we’re rarely conscious of it. We’ve become accustomed to it, and we know to expect it. Good design has become a commodity.
The thing with design is that when it’s good, it becomes invisible. A good design gets the job done but that alone doesn’t get you ahead of the game; the leading manufacturer of rubber boots doesn’t compete with the fact that they keep their customers’ feet dry.
When you’re browsing for flights for your next vacation, you’re thinking about your previous experiences. The check-in. How the staff greeted you. The taste of the coffee. The leg room. The WiFi. You’re not buying the solution to get from A to B, but the experience as a whole. You don’t buy the thing you need, you buy the thing you want.
If you want your product to stand out, you’ll have to design for the experience. The key moments that make up the whole story.

It’s as if they’re not even trying

When watching the Olympic games, you see athletes reach 8 meters in the long jump, throw a javelin for 85 meters and sprint 100 meters in under ten seconds. These people compete at the peak of human performance, but you can’t help but wonder how easy it looks for them. It seems to come to them so naturally as if they’re not even trying that hard.
But the athletes know that being the best means stripping the task all the way down to its molecular level and cutting out everything that doesn’t contribute to the goal. Only then can they start enhancing the tiniest details that can improve their performance. It takes everything to put all of the focus and energy into only those few moves, but it’s only those few moves that make up the whole performance and bring them to the top. To the viewer in front of the TV, there’s nothing but the few seconds that look easy.
The same effect is in action with design: the better the experience or the end result, the less visible the effort behind it. The best design always appears as if there’s no effort to it at all. But that’s an illusion, too. In reality, ”effortless” takes the most effort.
Google Search is an excellent example of this. So much is happening under the hood, but it only takes one input field and a button to deliver everything.
Google search

Effortless demands the most

Ideas might come easy, but no excellent design is ever created on a whim. It's never luck. I'd say 80% of a designer's work will never be directly seen, touched or heard by the user who buys or uses the finished product or service. That effort goes into identifying, questioning, explaining and deciding what the remaining 20% needs to focus on: finding the right why to design for and getting to know the people to design for, before even starting to think about the what.
Every thought, desire, and motivation researched, mapped, and prepared for. Every decision, contact, emotion, and action anticipated and accounted for. All with the aim to orchestrate a certain kind of experience that leaves a personal, emotional imprint. It’s this rigorous work done in the background that makes the final design feel effortless, sophisticated and pure. Because, as the user or the customer, you’re experiencing only those few key things that you’re meant to.

Your product needs a weight loss program

In the 1970s, Dieter Rams formulated his well-known ten principles of good design. One of these principles is ”good design is as little design as possible”. Times have changed, but the principle has never been more relevant. Today, we have so much that we want our users and customers to see and interact with that it's never difficult to fill a screen with information, interactive elements, and a bunch of features. It's an embarrassment of riches. You're going to need to leave stuff out to make it better.
But I’m not talking about having fewer buttons in the user interface or using a more limited colour palette – I’m talking about checking the pulse before giving your thumbs-up for another year of life support.
What if the real problem has nothing to do with the user interface looking outdated?
What if the feature you’ve been working on so hard adds nothing to the experience?
What if instead of giving it a facelift you’d get rid of the whole thing?

Wait, why are we doing this again?

The question of why is a powerful tool. It forces people out of autopilot mode and dares them to look at the big picture again. Make the why into a habit in the project, and it becomes a knife that starts separating the fat from the meat. You start to see more of the core. The things that matter and have real value.
But the deeper you carve with the why, the harder the decisions become.
Making the decisions to focus only on those few things that matter the most, and doing just those superbly – that’s the hardest job there is. Bubbles will burst, and a lot of people will get uncomfortable. There will be compromises and disagreements. But being the best at one thing means not doing the other thing at all.
In design, the devil isn't in the details, but in the things that aren't there. It's in the choices that aren't offered, and the white space that doesn't make people think. In the end, it's often the absence of things that defines the design.
The next time you face an excellent design, think of the things that aren’t there. Only then can you truly appreciate the things that are.


Janne Palovuori

Janne designs services that bring genuine value to people and business. As a service designer, he looks at services through a holistic lens. Focusing heavily on understanding the 'right why' to design for, Janne works for people and with people: the bedrock of his line of work is qualitative research, facilitation and co-creative methods. Janne's superpower is creating visualizations that conceptualize ideas and make information comprehensible, engaging and worthwhile to target audiences across different project phases.


Gofore Project Radar 2018 Summary

By now, it has become an annual tradition at Gofore to conduct a Project Radar survey at some point of the year to gain better insight into our presently running software projects. The 2018 Gofore Project Radar builds on two previous Project Radar iterations, conducted in fall 2016 (in Finnish only) and spring 2017, containing a set of questions relating to currently employed tech stacks, development practices and projected (or hoped-for) technological changes. Most of the questions from last year’s Project Radar made their way into this year’s Project Radar to allow for year-on-year variation detection. We also added some new questions that were considered important enough to warrant their inclusion in the survey.
So with the 2018 Project Radar results in, what can we tell about our projects’ technological landscape? What can we say has changed in our technological preferences and project realities over the past year?

The Gofore Project Radar survey results for 2018 are in! [Click on the image to enlarge it].

End of JavaScript framework fatigue?

Over the past few years, the frontend development scene has shown intermittent signs of ”framework fatigue” as a steady stream of new frameworks, libraries and tools has flooded the scene, challenging developers to work hard to keep pace with the latest developments, current community preferences and best practices. A look at our Project Radar data tells us that at Gofore there has been no significant churn when it comes to primary frontend technologies employed by individual projects. Instead, the results indicate a distinct consolidation around React, Angular and Vue.js, the three major contenders in the JS framework race. All these three have gained ground on older frontend techs (AngularJS, jQuery etc.) and ratcheted up their project adoption percentage, React being the top dog at a near-50% adoption rate among projects represented in the survey. If given a chance to completely rewrite their project’s frontend, most respondents would, however, pick Vue.js for the job.
The fact that there was no major change from last year in preferred frontend frameworks is perfectly in line with developments (or lack thereof) on the frontend scene over the past year. While last year saw major releases of both React and Angular roll out (with Vue.js 3.0 still somewhere on the horizon), there were no new frameworks to come along that would have been able to upset the status quo and catch on big time in the community (regardless of distinct upticks of interest in at least Svelte.js and Preact). This stability comes in stark contrast to the unsettled years in the not-too-distant past when the balance of power between different JS frameworks was constantly shifting as new frameworks and libraries appeared on the scene.
Looking beyond the battle of JS frameworks, a significant trend with regard to frontend development is the ever-increasing share of single-page applications among our projects’ frontends. Around 64% of this year’s Project Radar respondents reported to be working with single-page applications, up from 57% in last year’s Project Radar results.

Node.js on the rise

Moving our focus to the backend, where Java has traditionally held a predominant position among our projects, a somewhat different trend emerges. While the Project Radar data clearly brought out a tendency toward consolidation around the three major frontend frameworks, the picture on the backend side, on the other hand, looks a little more fragmented. Last year’s Gofore Project Radar pegged Java usage at nearly 50% among all projects represented in the survey, trailed by Node.js and C# each with a 15% share of the cake. While Java still came out on top this year, it was reported as the primary backend language in only 32% of the projects, down a whopping 15 points from last year’s results.
This drop was fully matched by an upward surge by Node.js, which more than doubled its share of the overall pie, up 17 points from last year. While C# stood its ground at close to 15%, a crop of new languages, missing from previous years’ results, entered the fray in the form of Kotlin, Clojure and TypeScript. Regardless of there being only a handful of projects where they were reported as primary backend languages, they contributed to the growing share of minority languages in our backend landscape, a group previously comprised of Scala, Python, Ruby and PHP.
Similarly to how respondents were asked to choose their hoped-for replacement tech for their frontends, we also asked our developers what was their preferred language for rewriting their backends if given the chance to do so. Last year most respondents would take the cautious approach and stick with their previously established backend languages. This year, however, there was considerable interest in rewriting backends in Kotlin, particularly among respondents who reported Java as their primary backend language (55% of all respondents were eager to switch to Kotlin from some other language).
Before drawing any conclusions from these statistics, it should be noted that upwards of 55% of respondents reported to be working with a microservices-type backend stack, suggesting that potentially multiple languages and server-side frameworks might be used within a single project. Still, the appeal of Kotlin, particularly among Java developers, is clearly apparent, as is the shift toward Node.js being the centerpiece of most of our backend stacks.
While the Project Radar does not shed any light on the reasons behind any technological decisions, the increasing popularity of Node.js can probably be put down to the above-mentioned prevalence of microservices-esque backend setups, where Node.js often slots in to serve as an API gateway fronting other services, which, in turn, might be written in other languages. Another contributing factor might be the emergence of universal JavaScript applications, where the initial render is handled by running JavaScript on the backend.
The popularity of Kotlin, on the other hand, has been picking up ever since Google enshrined it as a fully supported language for Android development. Considering its status as one of the fastest-growing programming languages in the world, its increasing presence in server environments is hardly surprising.

Going serverless

Now where do we run our project infrastructure in the year 2018? According to last year’s Project Radar results, more than two thirds (68%) of all respondents were still running their production code in a data center that was managed either by the client or a third party. This year, that number had come down to 59%. While this isn’t particularly surprising, what is mildly surprising, though, is the fact that IaaS-type infrastructure saw an even greater decline in utilization. Only 47% of all respondents reported to be running their production code in an IaaS (Infrastructure as a Service) environment, as opposed to 60% last year.
As the utilization of both traditional data center environments and IaaS services fell off, PaaS (Platform as a Service) and, especially, serverless (or FaaS, Function as a Service) platforms were reported to take up a fair portion of the overall share of production environments. While still in the minority, PaaS services were reported to be used by 12% of all respondents, tripling their share of 4% from last year, and serverless platforms by 16.5% of all respondents (no reported usage last year as there was no dedicated response option for it).
As our projects’ production code is more and more removed from the actual hardware running it, containerization has also become more commonplace, as evidenced by the fact that Docker is now being used by 76% of all respondents (up from 43% last year). Despite Docker’s increasing adoption rate, there wasn’t much reported use for the most popular heavy-duty container orchestration platforms: Kubernetes, Docker Swarm, Amazon Elastic Container Service and OpenShift Container Platform were only reported to be used by 14% of all respondents.
Since running our code in cloud environments enables shorter deployment intervals, one could think we’d be spending more time flipping that CI switch that kicks off production deployment. And to some extent, we do: we have fewer projects where production deployments occur only once a month or less often (10% as opposed to 20% last year), but, somewhat surprisingly, fewer projects where production deployments are done on a daily basis (10.5% vs 12% last year).

Miscellaneous findings

  • Key-value databases doubled their reported project adoption (32% vs 16.5% last year)
  • Jenkins was the most prevalent CI platform among represented projects, with a 57% adoption rate (its closest competitor, Visual Studio Team Services/Azure DevOps well behind at 17%)
  • Close to nine percent of all respondents reported to be using a headless CMS (Content Management System)
  • Ansible was being used by 71% of respondents who reported using some configuration management (CM) tool, clearly ahead of any other CM tool (Chef was used by a little shy of eight percent of CM tool users, while Puppet had no reported users)
  • Development team sizes were smaller than last year (57% of dev teams had five or more team members last year, whereas this year such team sizes were reported by 52% of respondents)
  • The reported number of multi-vendor teams was smaller than last year (41% vs 47% last year)
  • Most respondents reported to be working on a project that had been running 1-3 years at the time of responding
  • Most project codebases clock in at 10k – 100k in terms of LOC (lines of code)
  • Scrum was the most favored project methodology, being employed by nearly 51% of all represented projects. Kanban, on the other hand, saw the most growth of any methodology (22% vs 12% last year)

Some closing thoughts

Once again, the annual Project Radar has told us a great deal about our preferred programming languages, frameworks, tooling and various other aspects of software development at Gofore. While the survey is by no means perfect – and I can easily come up with heaps of improvement ideas for the next iteration – the breakdown of its results enables us to more easily pick up technological trends in our ever-increasing multitude of projects. This year’s key takeaways are mostly reflections of industry trends at large, but there are some curiosities that would be hard, if not impossible, to spot if not for the Project Radar. The usefulness of these findings is debatable, as some of them fall under trivia, but still they come as close to a ”look in the mirror”, technology-wise, as one can get at a rapidly growing company of this size.

Henri Heiskanen

Henri is a software architect specializing primarily in modern web technologies, JavaScript/Node.js & JVM ecosystems and automated infrastructure management. A stickler for clean code and enforcement of best practices in project settings, Henri is uncompromising in delivering well-tested, high-quality code across the stack.


Gofore Bots
I wrote a blog post last year about how bots are used to automate routine work in our company (Gofore). The same topic is even more relevant today when we are stepping into an era of AI. Let’s see what has happened to our bots since my last blog.

30 little bots

Today we have around 30 active bots that integrate with Slack. Almost half of these slackbots focus on utilisation and billing functions. Reliable utilisation and billing are a consulting company’s engine oil, enabling all other activities. These bots monitor people’s hour markings, calibrate utilisation capacity, remind people to bill customers and catch human errors. Utilisation and billing were also the first functions we automated.
The other significant group is reporting slackbots. Every company has a lot of business-critical information that needs to be made visible to employees. Our slackbots list, for example, customer statistics, site-based information and the highest-impact blog and social media posts. These slackbots can also be used on demand.
The third group of slackbots is everything else. We have an overtime bot, SLA-observer bot and bots for the sales team. One slackbot updates users’ vacation statuses and the other connects people for a beer.

In God we trust, others bring bots 

Basically, a bot is a piece of software that performs automated tasks. Despite this simplicity, bots have advantages that many other applications lack. I have listed the three most important ones.
A slackbot’s best asset is simplicity, because its user interface is mostly text and icons. Likewise, interaction with a bot is based on text rather than graphical forms or other UI elements. Some bots are totally invisible to users and just run in the background.
A slackbot’s simple user interface helps to focus on essentials. There is no need to spend time on responsive design challenges or debugging the newest JavaScript framework defects. Product planning can be targeted at feature impact and validating user needs.
The second advantage is bots’ overall popularity. Many users have used bots before, so a bot’s behaviour is well known. For this reason, intensive training and user guides can be avoided. Bots’ messages are displayed continuously in different Slack channels, so promotion also happens naturally.
The third advantage is the Slack platform. Slack provides a smooth user experience, out-of-the-box services (security, authentication, performance, data storage etc.), wide device support and excellent integration options. Although all our bots are handmade, Slack has sped up our development enormously.
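To illustrate how little code a useful bot can need, here is a minimal sketch of an hour-marking reminder posted through a Slack incoming webhook. The webhook URL, message wording and function names are placeholders of my own, not Gofore's actual bot code.

```python
import json
import urllib.request

# Placeholder webhook URL; Slack incoming webhooks accept a JSON
# body with a "text" field (see Slack's webhook documentation).
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_reminder(name: str, missing_days: int) -> dict:
    """Build the Slack message payload for one person."""
    return {
        "text": f"Hi {name}! You have {missing_days} day(s) of "
                f"unmarked hours. Please update your timesheet."
    }

def post_to_slack(payload: dict) -> None:
    """POST the payload to the incoming webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire and forget

# Example: build (but don't send) a reminder
print(build_reminder("Maija", 2)["text"])
```

A real bot would of course first fetch the list of people with missing markings from the hour-tracking system before looping over `post_to_slack`.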

Value for life

The value proposition is the reason a product exists; in our case it can be summarised in three points. Better job satisfaction: bots take care of boring and repetitive tasks and let people work on meaningful and interesting duties. Cost savings: bots take over time-consuming and error-sensitive functions; in practice, our bots have already replaced a large part of middle management tasks. Improved decision-making: business-critical data is visible to everybody 24/7. Every new bot idea is validated and prioritised against these three factors.
Some months ago, our bot team created an internal survey regarding how people feel about our slackbots. The results were very promising – 95% of people think that the bots are useful and 30% of people think that the bots are vital to the company. This feedback gave an extra boost and motivation to the whole team to continue development work.

Work in progress

My estimate is that our company still has around 20-30 manual processes that could easily be automated by bots: parts of the recruiting process, subcontractor management, credit card administration and device handling, just to name a few. After this low-hanging fruit has been picked, it’s time to add more AI to the bots.
The outcome of many internal projects is mediocre. In contrast, bots bring value to our company every single day. When it has been said more than once that these bots are actually part of our company’s competitive advantages, you know that product development has reached a goal.
Juhana Huotarinen – the proud Product Owner of the Gofore Bot Team
Graphic design
Ville Takala


Juhana Huotarinen

Juhana is an experienced software project lead specializing in introducing Lean thinking and agile methods into large public-sector information system projects. In recent years he has been kept busy by Trafi, Valtori (Valtiokonttori), Opetushallitus, Kela and Liikennevirasto, among others. Earlier in his career, Juhana has also worked as a project manager and software designer. You can read more of Juhana's thoughts in his expert blogs and on Twitter.


In my last blog post I shared my ideas about some nice features our meeting room system should have – one was measuring air quality in meeting rooms. Soon after publishing the blog post, I got a call from Mika Flinck from Digita, who offered a helping hand in developing this feature. After the call, Digita sent us two Elsys ERS-CO2 sensors, which work on Digita’s Long Range Wide Area Network (LoRaWAN), for development and testing purposes. The sensors can measure a room’s temperature, moisture, light level and carbon dioxide (CO2) concentration.
Elsys ERS-CO2-sensor

One of the Elsys ERS-CO2-sensors in Tampere.

LoRaWAN is a wireless Low Power, Wide Area Network (LPWAN) networking protocol administrated by the LoRa Alliance. IoT devices on a LoRaWAN network can have batteries that last up to 10 years thanks to the low-power technology, and devices typically send messages to the network infrequently, e.g. every 15 minutes.
Architecture of the current solution
Architecture of the current solution.
In Digita’s LoRaWAN, all messages and commands are handled via Actility ThingPark, which works as a gateway between the LoRaWAN network and the Internet. In our case, Actility ThingPark resends all messages in JSON format from LoRaWAN to Amazon Web Services’ (AWS) API Gateway. The API Gateway then passes the messages to a Lambda function that decodes the Elsys payload, and the decoded information is finally sent to our meeting room system running in EC2. All client systems can then get updated information from the server.
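To make the decoding step more concrete, below is a minimal sketch of what the Lambda's payload decoding could look like, assuming the publicly documented Elsys generic payload format (a type byte followed by a fixed-size value: 0x01 temperature, 0x02 humidity, 0x04 light, 0x06 CO2). This is an illustration, not our actual Lambda code; the field names and sample frame are made up.

```python
def decode_elsys(payload: bytes) -> dict:
    """Decode a raw Elsys ERS-CO2 frame into sensor readings."""
    readings = {}
    i = 0
    while i < len(payload):
        t = payload[i]
        if t == 0x01:  # temperature: 2 bytes, signed, 0.1 °C resolution
            raw = int.from_bytes(payload[i + 1:i + 3], "big", signed=True)
            readings["temperature"] = raw / 10
            i += 3
        elif t == 0x02:  # relative humidity: 1 byte, %
            readings["humidity"] = payload[i + 1]
            i += 2
        elif t == 0x04:  # light: 2 bytes, lux
            readings["light"] = int.from_bytes(payload[i + 1:i + 3], "big")
            i += 3
        elif t == 0x06:  # CO2: 2 bytes, ppm
            readings["co2"] = int.from_bytes(payload[i + 1:i + 3], "big")
            i += 3
        else:  # unknown type: stop rather than guess the field length
            break
    return readings

# Example frame: 21.6 °C, 41 % RH, 5 lux, 640 ppm CO2
sample = bytes([0x01, 0x00, 0xD8, 0x02, 0x29,
                0x04, 0x00, 0x05, 0x06, 0x02, 0x80])
print(decode_elsys(sample))
```

In the real pipeline, a function like this would run inside the Lambda handler and forward the resulting JSON to the meeting room server in EC2.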

What is good room air quality?

For the meeting room system, I gathered ideal values for good air quality from several sources. I preferred information from the Finnish Institute of Occupational Health (FIOH) and the Organisation for Respiratory Health, which contains recommendations for air temperature and moisture according to seasonal and weather conditions. Working conditions also set some boundaries for good room air quality. I used the following values for our meeting room system.

            Moisture (%)        Carbon dioxide (ppm)   Temperature (°C)
Good        25 – 45             < 800                  20 – 23
Bad         0 – 25 or 45 – 70   800 – 1150             19 – 20 or 23 – 25
Very bad    > 70                > 1150                 < 19 or > 25

The limits are averaged from several sources and assume daily work in an office environment. The meeting room tablets can now visualize the level of each metric using different colours. In the future, we will develop a feature that draws all the limits on the timeline graphs and highlights any points exceeding them.
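On the tablets, each metric is mapped to a colour category according to these limits. Here is a small sketch of that classification logic, with thresholds taken directly from the table above (the function names are illustrative, not our actual code):

```python
def classify_co2(ppm: float) -> str:
    """Map a CO2 reading (ppm) to an air quality category."""
    if ppm < 800:
        return "good"
    if ppm <= 1150:
        return "bad"
    return "very bad"

def classify_temperature(celsius: float) -> str:
    """Map a temperature reading (°C) to an air quality category."""
    if 20 <= celsius <= 23:
        return "good"
    if 19 <= celsius < 20 or 23 < celsius <= 25:
        return "bad"
    return "very bad"

def classify_moisture(percent: float) -> str:
    """Map a relative moisture reading (%) to an air quality category."""
    if 25 <= percent <= 45:
        return "good"
    if percent <= 70:
        return "bad"
    return "very bad"
```

The tablet UI would then pick a colour per category, e.g. green for "good", yellow for "bad" and red for "very bad".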

A timeline graph

A timeline graph from the meeting room in Jyväskylä. The graph can be zoomed and panned. Users can hover on the graph to get detailed values.

A graph from the current solution
The upper right corner shows the latest information of air quality on the tablet view.

Last thoughts

LoRaWAN sensors are very easy to handle: just configure and forget. Ideally, the only maintenance is changing the sensor’s battery after a few years; nothing else needs to be done. The meeting room system now has configuration support where we can define which room a sensor is located in. When a sensor is moved to a new location, we just link it to the new room.
For measuring air quality, I have a vision of combining people’s subjective opinions with the sensor data. This will make us smarter about what good air quality is, especially when taking into account how many people were in the room. Maybe someday our Seppo-bot can ask a few simple questions after a meeting.
Big thanks to Mika Flinck from Digita for lending us the LoRaWAN sensors for development and testing purposes! This was a great opportunity to learn about LoRaWAN and develop our meeting room system further.

Jarkko Koistinaho

Jarkko works at Gofore as a technical project manager and is a quality- and testing-oriented software professional. Besides testing, he can step into a developer's shoes, act as a Scrum Master, or handle tasks typical of a systems specialist. Model-based testing and performance testing are Jarkko's special areas of expertise.
