The GraphQL Finland 2018 conference was held recently (18-19.10.2018) at Paasitorni and was the first event of its kind in Finland. The conference brought a day of workshops and a day of talks around GraphQL. It was organized by the same people as React Finland, and the smooth organisation showed it. The talks were interesting, the venue was appropriate, the food was delicious, the atmosphere was cosy and the afterparty was awesome. Gofore was one of the gold sponsors and organized the afterparty at Kamppi.
All of the talks were live streamed and they are available on Youtube. I was lucky to get a ticket to the event and be able to enjoy the talks live. Overall, most of the talks were easy to comprehend although I only had a little experience with GraphQL through experiments and what I had learnt a couple of months ago at the React Finland 2018 conference.
“GraphQL is an open source data query and manipulation language, and a runtime for fulfilling queries with existing data. It was developed internally by Facebook in 2012 before being publicly released in 2015. It provides a more efficient, powerful and flexible alternative to REST and ad-hoc web service architectures. It allows clients to define the structure of the data required, and exactly the same structure of the data is returned from the server, therefore preventing excessively large amounts of data from being returned.” – Wikipedia
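To illustrate the point about clients defining the shape of the response, a minimal query might look like this (a sketch only – the schema and field names here are made up for the example):

```graphql
# The client asks for exactly the fields it needs…
query {
  conference(name: "GraphQL Finland") {
    year
    talks {
      title
    }
  }
}
```

…and the server responds with JSON in exactly that shape, nothing more and nothing less.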
(Life is hard, learning GraphQL easy)
Notes from the conference
The talks at GraphQL Finland were quite fast paced and more like lightning talks compared to the React Finland event: it was quite tough to digest all the new information. Fortunately, the talks were recorded, so you can concentrate on interesting and relevant topics and get back to the others later. Also, the sponsors’ lounge by Gofore and Digia provided a nice relaxing space to get your thoughts together. I have to say, Digia’s Star Wars Pinball machine was quite fun.
The talks covered different aspects of GraphQL and surrounding topics in detail. Here are my notes from the talks which I found most interesting and watched live at the event.
(Goforeans in the sponsor lounge)
(Goforeans challenging attendees to foosball)
The event started with Adam Miskiewicz’s story from Airbnb and incrementally adopting GraphQL. It’s simple to start using GraphQL in your project but adding it incrementally and carefully in huge codebases powering large distributed systems is not quite as straightforward. The talk dived into how Airbnb is tackling this challenge, what they’ve learned so far, and how they plan to continue evolving their GraphQL infrastructure in the future. Towards GraphQL Native!
#GraphQLFinland started with @skevy talking about how @AirbnbEng incrementally adopted #GraphQL in large codebase. Why, when, how, where? Enables evolving API in new ways. Still iterating, towards GraphQL Native. pic.twitter.com/DiSKy4gmmY
— Marko Wallin (@walokra) October 19, 2018
Kadi Kraman from Formidable Labs talked about going offline first with GraphQL. She did a nice interactive demo with React Native and Apollo 2. Users expect your mobile app to work offline and the tooling in GraphQL makes it reasonably straightforward to get your React Native app working offline. Slides
“Do this as you go and offline comes almost as a side-effect”
“Do this as you go and offline comes almost as a side-effect” – Going offline first with #GraphQL by @kadikraman at @GraphQLFinland. Nice interactive demo with #ReactNative and #Apollo2. Slides: https://t.co/T5neq5Zxln. #graphqlfinland pic.twitter.com/t5v5Bx2g9s
— Marko Wallin (@walokra) October 19, 2018
Life is hard, without documentation. Carolyn Stransky presented her story of ups and downs when learning GraphQL and documentation’s role in it. The problem with GraphQL is that – because there’s no “vanilla” GraphQL – there’s no central hub for all of the information and tooling necessary to learn it. That material is under-utilised and scattered throughout the community. The talk touched on how to better enable GraphQL docs for learning and comprehension, and the slides pointed to good resources.
Documentation is one pain point of learning new things. @carolstran talked how we can better enable #GraphQL docs for learning and comprehension at @GraphQLFinland. Slides pointed to good resources: https://t.co/kZ9qGwS72Z. #graphqlfinland pic.twitter.com/uyCaDoec8h
— Marko Wallin (@walokra) October 19, 2018
Benjie Gillam from PostGraphile taught how a database-centric approach to GraphQL API development can give your engineers more time to focus on the important parts of your application. Adhere to GraphQL best practices, embrace the power of PostgreSQL, and avoid common pitfalls. Interesting slides.
Databases and #GraphQL with PostGraphile 🤔 @Benjie talked about Database-first GraphQL development at @GraphQLFinland. Data-centric approach can give time to focus on important parts of your app. Interesting slides: https://t.co/LdITgG39kd. #graphqlfinland pic.twitter.com/KlS0CoTG34
— Marko Wallin (@walokra) October 19, 2018
Christoffer Niska gave some good tips for software development: Don’t over-abstract, test everything, use static type checking, follow best practices, don’t prematurely optimise.
Listening good general tips for software development from @Crisu83 at @GraphQLFinland stream while lounging at @GoforeGroup sponsor lounge. Don’t over-abstract, test everything, use static type checking, follow best practices, don’t prematurely optimize. #graphqlfinland pic.twitter.com/jrc07Uo8xs
— Marko Wallin (@walokra) October 19, 2018
The “(Un)expected use of GraphQL” talk by Helen Zhukova showed the benefit of a single code base on the client and server side. It was partly live coded with, i.a., CodeSandbox. The “any DB”, in this case, was MongoDB.
(Un)expected use of #GraphQL talk by @return_hz at @GraphQLFinland showed the benefit of single code base on client and server side. Partly live coded with i.a. @codesandboxapp. The any DB in this case was MongoDB. #graphqlfinland pic.twitter.com/kRMILt2oxu
— Marko Wallin (@walokra) October 19, 2018
The mysterious closing keynote was Dan Schafer talking about GraphQL’s past, present and future. “Strive for single sources of truth”. There are still lots of things to do in the ecosystem.
Day full of #GraphQL information borbardment at @GraphQLFinland closed with a mysterious keynote by @dlschafer. The past, present and future of GraphQL. “Strive for single sources of truth”. Still lots of things to do in the ecosystem. #graphqlfinland pic.twitter.com/u9ypVRwOpl
— Marko Wallin (@walokra) October 19, 2018
The last chance to practice your Finnish was at the Afterparty 🎉 at the Gofore office!
“Someone said your afterparty was the best conference party ever :)”
Foosball was popular at the afterparty, too.
Know thy Platform
If you have any hands-on experience developing a new software/hardware platform, you know how laborious a task it is. Developers, engineers, designers, usability experts and managers (among others) can spend countless hours fine-tuning the subtle interplay of hardware components’ combined performance, UI interactions, animations, branding, usability and, foremost, the platform itself.
Upon release, besides the huge marketing budget, documentation is one of the key aspects that make or break a new platform. See for example the iOS Human Interface Guidelines. Documentation quality has improved; this is a blessing of the 2010s and the Internet, as the competition against other platforms is make-or-break. Do not think that you are a platform expert after studying it, though, as you might end up coming up with custom conventions or UI patterns that no user is familiar with. Trust the guidelines that the platform developers have provided for you. Do not re-invent the wheel.
Each platform comes with unique features; unique does not always mean great. It’s your job as a designer to harness the crème de la crème of each platform to suit your specific case. Understanding and unveiling the potential of a set of features each platform provides – that is the truest sign of any great designer.
Examples of platform features:
- Devices with large screens provide lots of area for multi-tasking or visually intense interactions
- Prefer NUI over GUI features, as natural user interfaces are a more intuitive way for people to interact (Read more: https://www.interaction-design.org/literature/article/natural-user-interfaces-what-are-they-and-how-do-you-design-user-interfaces-that-feel-natural)
- Physical knobs and switches
- Audio quality
- Voice control
I’m not going to explain how to get the potential out of each of these in detail now. Just understand them profoundly. Besides, platform features are an easy thing to spot – they are usually used in the marketing materials and technical specifications of any product.
The fine-tuned staccato that makes the platform reverberate needs to be harnessed in order to design the most functional and engaging experiences – ones that take advantage of the platform’s full capability.
Most platforms come with pre-installed applications – study them in detail and try to understand why certain decisions were made. If you are new to the platform, first try to design them in wireframes by guessing what the application does, only based on the icon of the app. Discussing with other bright minds can be helpful here. After making your own rough paper prototypes of the same apps – compare them with the apps provided.
Helpful resources for understanding the platform do exist: most of the time you can find interviews, videos and blog posts online about those exact design decisions. That is just part of how marketing works nowadays. The platform developers want to explain why they made specific decisions and what criteria were considered. Understand, though, that those 2-3 minute videos may be an overview of a 12-month project. Go further and dig into the specific people interviewed; they usually have great Behance profiles, blogs, Twitter feeds or GitHub profiles.
Benchmarking is like listening: you learn from other people’s know-how. Stick with top-rated applications for the platform, as their user value is confirmed by the users themselves. Also, big players are good at making great software (Airbnb, Spotify, Facebook, Google, EA, …).
If the platform is brand new, consider similar platforms and their conventions – make your best guess on which of these to use in your specific case. Iterate a lot on the design by making rapid prototypes and test with real end-users. Build – Measure – Learn (- Pivot or Persevere).
Interacting with the platform
After you have familiarized yourself with the platform and maybe drawn some rough wireframes of your app, you can start planning the interactions themselves. To avoid cognitive load, you should keep to platform conventions and utilise users’ pre-existing know-how. This means taking advantage of common human skills and reusing domain skills. The less you force the user to learn new things, the less annoyed she is.
It is not enough to know what looks good, but a well-documented platform also contains examples of dos and don’ts and explains when a specific UI pattern is relevant. See for example documentation of Material Design. It has plenty of case-by-case examples.
A well-documented platform will also provide alternatives to a specific UI component. Try learning the major components well and understanding their most common use cases – this makes it easier to pull out a component for a specific need.
Your customer might say: “Display a list of PS4 games and open each of them”. Basically, this should translate in your ears into: “I need a component that is able to display multiple elements in one view and allows the user to view a detailed view of each element”.
If you did your homework, finding a component like this from the platform pattern library should be rather easy.
Again, take advantage of all possible interaction possibilities of a platform, understand the context and go ahead – make that great interaction.
To be continued – This was part 1 in a series of upcoming posts on the topic: “Design Essentials: How to Prosper on Every Platform”.
Get ready for impact
The use of cloud services and web technologies means that it is no longer important where your physical base is located. Technology is changing every aspect of our lives and the ability for people working within both government and private organisations to work remotely is changing the way we do business (for the better). This sociological impact means that designers, developers and business professionals who work remotely can have a direct impact on both the environment around them and the wider world.
This decentralisation of knowledge and work has disrupted many industries and price is often the first thing to change. A coder based in a low-cost region such as Asia can undercut his peers in California or Western Europe. Real time working tools such as Jira boards, Trello and Slack mean that teams can work effectively and efficiently wherever team members are located.
Add culture to this globalised decentralisation and the mix becomes significantly richer. I have first-hand experience working with clients in some of the most multi-cultural cities in the world such as Amsterdam, London and Dubai. Multi-cultural teams look at challenges through different lenses. It is natural to have inbuilt bias even if you don’t realise yourself. Where you were brought up, where you went to college and your circle of friends all influence your outlook on life and how you approach a challenge. Bringing together diverse cultures encourages discussion, it helps to produce a richer all-inclusive solution. Digital products and services are used by humans and so a rich all-inclusive product or service is more likely to be successful.
Culture promotes creativity
It is often said that travel broadens the mind however people who travel but fail to engage with the local culture have less of a creative boost than those who immerse themselves in it.
Lee and Abs from Dubai Future Accelerators
There are many examples of companies who have failed in their attempts at doing business in a new territory because they haven’t recognised the cultural differences. If you want to succeed in developing a product or service internationally, you need to consider many things, including language, religious beliefs and different ways of working, in order to build a robust and mutually beneficial business strategy.
Understanding your audience
Body language, casual and business etiquette, transparency and respect are some of the first things that I personally consider when approaching new international markets. It is important to recognise the cultural differences and ways of working, but more importantly to respect these differences even if you do not yet understand them. Change impacts all of us, it’s just that some adapt to it easier than others. Whether one shares the same beliefs or not, if you are to achieve success in international business a mutual understanding is imperative. Relationship building blocks are a foundation for growth. It is my belief that strong international collaboration between diverse cultures can and will produce better products and services and allow people to achieve more.
Take every opportunity to learn, understand and appreciate as many different cultures as you can in your lifetime. You never know where the creativity can lead you. Disruptive innovation comes from all corners of the earth. Often a problem in the most challenging environment can create the most life-changing solution.
Dubai Future Accelerators (DFA)
In September 2018 Gofore was selected to participate in the DFA program. This program is designed to bring together companies from across the globe to co-create products and services, with the aim of helping Dubai Government entities face the challenges of making Dubai the city of the future. Working with the DFA and various government authorities has highlighted the importance of appreciating and embracing cultural differences. I have been working with people from all walks of life, from university students to senior government officials, all of whom share a passion for improving people’s lives through digital. This phase of the DFA program draws to a close in late November – check out my next post to learn more about some of the exciting solutions that will be developed as a result of the program.
You can read more about the Dubai Future Accelerator program here: The Dubai Future Accelerator Program
What Is Vue CLI?
Vue CLI (version 3) is a system for rapid Vue.js development. It’s a smooth way to scaffold a Vue project structure and allows a zero-config quick start to coding and building. Vue CLI Service, the heart of every Vue CLI app, neatly abstracts away common front-end development tools such as Babel, webpack, Jest and ESLint, while still offering flexible configuration and extension points as your project grows.
Let’s go through a few tips that’ll help you get even more out of your Vue CLI App.
1. Code Splitting And Keeping Bundles Light
Code-split routes are loaded only on demand, which can have a major benefit on the initial loading time of your app.
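As a sketch of what route-level code splitting looks like in practice (the view name and path here are made up for the example), routes can be declared with dynamic imports so that webpack emits a separate chunk for each view:

```js
// router.js – a hypothetical route using a dynamic import.
// webpack splits the view into its own chunk, loaded on first navigation.
const routes = [
  {
    path: '/reports',
    // The chunk name comment is optional but makes bundles easier to identify
    component: () => import(/* webpackChunkName: "reports" */ './views/Reports.vue')
  }
]
```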
Vue CLI also comes with Webpack Bundle Analyzer. It offers a nice birds-eye view of the built app. You can visualize bundles, see their sizes and also the size of modules, components or libraries they consist of. This will come in handy when Vue CLI warns you about bundle sizes getting out of hand, giving you some hints where to trim down the fat.
Vue CLI Service provides an extra --report argument for the build command to generate the build report. Add a handy little script for it to your package.json; after running npm run build:report you’ll get report.html generated in your dist folder, which you can then open in your browser.
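The script could look something like this (a minimal sketch – the script name is just a convention):

```json
{
  "scripts": {
    "build:report": "vue-cli-service build --report"
  }
}
```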
2. Fine-Tuning the Prefetching
Not only does Vue CLI handle code splitting, it also automatically injects these bundles as resource hints into your HTML as <link rel="prefetch" href="bundle.js"> tags. This enables browsers to download the files while the browser is idle, making navigation to different routes snappier.
While this may be a good thing, in larger apps there might be many routes that aren’t meant for the average user, and prefetching these routes will consume unnecessary bandwidth. You can disable the prefetch plugin in vue.config.js and manually choose the prefetchable bundles with webpack’s inline comments.
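A sketch of both steps (the view name is made up for the example):

```js
// vue.config.js – remove the automatic prefetch hints
module.exports = {
  chainWebpack: config => {
    config.plugins.delete('prefetch')
  }
}

// Then, in your router, opt individual chunks back in with an inline comment:
const Reports = () => import(/* webpackPrefetch: true */ './views/Reports.vue')
```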
3. Use Sass Variables Everywhere
Vue’s scoped styles, Sass and BEM are helpful tools for keeping your CSS nice and tidy. You probably would still like to use some global Sass variables and mixins inside your components, preferably without importing them separately every time.
Instead of importing them separately in every component’s style block, you can configure the Sass loader in vue.config.js to make them available everywhere.
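Such a configuration might look like this (the file path is an assumption; note also that the option is called data in the sass-loader version Vue CLI 3 shipped with, while newer sass-loader versions renamed it to prependData and later additionalData):

```js
// vue.config.js – prepend global Sass variables/mixins to every component
module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import "@/styles/_variables.scss";`
      }
    }
  }
}
```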
4. Test Coverage with Jest
Vue CLI comes (optionally) with Jest all configured, and with Vue Test Utils writing unit tests for your components is a breeze. The CLI Service supports all of Jest’s command line options as well. A nice thing about Jest is its built-in test coverage report generator.
To generate a report, again for convenience, you can add another script to your package.json and run it with npm run test:coverage. Not only does it show a report in your terminal, an HTML report will also be created in the coverage folder of your project (you might want to add that folder to your .gitignore). With collectCoverageFrom in your Jest config, you can make the coverage also include files that don’t have tests yet, helping you identify and increase coverage where it’s needed.
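Both additions might look something like this in package.json (a sketch – the glob patterns are assumptions you’d adapt to your project):

```json
{
  "scripts": {
    "test:coverage": "vue-cli-service test:unit --coverage"
  },
  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{js,vue}",
      "!src/main.js"
    ]
  }
}
```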
5. Modern Build for Modern Browsers
Most of us probably still need to take care of users with older browsers. Luckily, Vue CLI supports a modern mode: with a single extra --modern argument for the build command, it produces two versions of your app – a modern bundle targeting browsers that support ES modules, and a legacy bundle for older ones. The generated HTML loads the right one with a <script type="module"> tag: modern browsers will download the files defined with type="module", while older browsers fall back to the nomodule bundle. The final addition to your package.json is a script that runs the build with that flag.
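A sketch of that script (the name build:modern is just a convention):

```json
{
  "scripts": {
    "build:modern": "vue-cli-service build --modern"
  }
}
```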
Providing modern browsers with modern code will most likely improve your app’s performance and size.
It takes tremendous effort to design something that feels effortless and pure. But does your team have what it takes to get rid of the extra weight to get there?
It works – so why isn’t it selling?
Many solutions that we use on a daily basis are examples of good design. The ergonomics of your coffee pot, the non-slip coating of your toothbrush. The social media app that you browse on your phone, and the bot sending you a reminder about that thing. Our every day is surrounded by good design, but we’re rarely conscious of it. We’ve become accustomed to it, and we know to expect it. Good design has become a commodity.
The thing with design is that when it’s good, it becomes invisible. A good design gets the job done but that alone doesn’t get you ahead of the game; the leading manufacturer of rubber boots doesn’t compete with the fact that they keep their customers’ feet dry.
When you’re browsing for flights for your next vacation, you’re thinking about your previous experiences. The check-in. How the staff greeted you. The taste of the coffee. The leg room. The WiFi. You’re not buying the solution to get from A to B, but the experience as a whole. You don’t buy the thing you need, you buy the thing you want.
If you want your product to stand out, you’ll have to design for the experience: the key moments that make up the whole story.
It’s as if they’re not even trying
When watching the Olympic games, you see athletes reach 8 meters in the long jump, throw a javelin for 85 meters and sprint 100 meters in under ten seconds. These people compete at the peak of human performance, but you can’t help but wonder how easy it looks for them. It seems to come to them so naturally as if they’re not even trying that hard.
But the athletes know that being the best means stripping the task all the way down to its molecular level and cutting out everything that doesn’t contribute to the goal. Only then can they start enhancing the tiniest details that can improve their performance. It takes everything to put all of the focus and energy into only those few moves, but it’s only those few moves that make up the whole performance and bring them to the top. To the viewer in front of the TV, though, there’s nothing but the few seconds that look easy.
The same effect is in action with design: the better the experience or the end-result, the less visible the effort behind it. The best design always appears as if there’s no effort to it at all. But that’s an illusion, too. In reality, “effortless” takes the most effort.
Google Search is an excellent example of this. So much is happening under the hood, but it only takes one input field and a button to deliver everything.
Effortless demands the most
Ideas might come easy, but no excellent design is ever created on a whim. It’s never luck. I’d say 80% of a designer’s work effort will never be directly seen, touched or heard by the user who buys or uses the finished product or service. That effort goes to identifying, questioning, explaining and deciding what the remaining 20% needs to focus on: finding out the right why to design for and getting to know the people who to design for, before even starting to think about the what.
Every thought, desire, and motivation researched, mapped, and prepared for. Every decision, contact, emotion, and action anticipated and accounted for. All with the aim to orchestrate a certain kind of experience that leaves a personal, emotional imprint. It’s this rigorous work done in the background that makes the final design feel effortless, sophisticated and pure. Because, as the user or the customer, you’re experiencing only those few key things that you’re meant to.
Your product needs a weight loss program
In the late 1970s, Dieter Rams formulated his well-known ten principles of good design. One of these principles goes: “good design is as little design as possible”. Times have changed, but the principle has never been more relevant. Today, we have so much that we want our users and customers to see and interact with that it’s never difficult to fill a screen with information, interactive elements, and a bunch of features. It’s an embarrassment of riches. You’re going to need to leave stuff out to make it better.
But I’m not talking about having fewer buttons in the user interface or using a more limited colour palette – I’m talking about checking the pulse before giving your thumbs-up for another year of life support.
What if the real problem has nothing to do with the user interface looking outdated?
What if the feature you’ve been working on so hard adds nothing to the experience?
What if instead of giving it a facelift you’d get rid of the whole thing?
Wait, why are we doing this again?
The question of why is a powerful tool. It forces people out of autopilot mode and dares them to look at the big picture again. Make the why into a habit in the project, and it becomes a knife that starts separating the fat from the meat. You start to see more of the core. The things that matter and have real value.
But the deeper you carve with the why, the harder the decisions become.
Making the decisions to focus only on those few things that matter the most, and doing just those superbly – that’s the hardest job there is. Bubbles will burst, and a lot of people will get uncomfortable. There will be compromises and disagreements. But being the best at one thing means not doing the other thing at all.
In design, the devil isn’t in the detail, but the things that aren’t there. It’s in the choices that aren’t offered, and the white space that doesn’t make people think. In the end, it’s often the absence of the things that defines the design.
The next time you face an excellent design, think of the things that aren’t there. Only then can you truly appreciate the things that are.
By now, it has become an annual tradition at Gofore to conduct a Project Radar survey at some point of the year to gain better insight into our presently running software projects. The 2018 Gofore Project Radar builds on two previous Project Radar iterations, conducted in fall 2016 (in Finnish only) and spring 2017, containing a set of questions relating to currently employed tech stacks, development practices and projected (or hoped-for) technological changes. Most of the questions from last year’s Project Radar made their way into this year’s Project Radar to allow for year-on-year variation detection. We also added some new questions that were considered important enough to warrant their inclusion in the survey.
So with the 2018 Project Radar results in, what can we tell about our projects’ technological landscape? What can we say has changed in our technological preferences and project realities over the past year?
The Gofore Project Radar survey results for 2018 are in! [Click on the image to enlarge it].
Over the past few years, the frontend development scene has shown intermittent signs of “framework fatigue” as a steady stream of new frameworks, libraries and tools has flooded the scene, challenging developers to work hard to keep pace with the latest developments, current community preferences and best practices. A look at our Project Radar data tells us that at Gofore there has been no significant churn when it comes to primary frontend technologies employed by individual projects. Instead, the results indicate a distinct consolidation around React, Angular and Vue.js, the three major contenders in the JS framework race. All these three have gained ground on older frontend techs (AngularJS, jQuery etc.) and ratcheted up their project adoption percentage, React being the top dog at a near-50% adoption rate among projects represented in the survey. If given a chance to completely rewrite their project’s frontend, most respondents would, however, pick Vue.js for the job.
The fact that there was no major change from last year in preferred frontend frameworks is perfectly in line with developments (or lack thereof) on the frontend scene over the past year. While last year saw major releases of both React and Angular roll out (with Vue.js 3.0 still somewhere on the horizon), there were no new frameworks to come along that would have been able to upset the status quo and catch on big time in the community (regardless of distinct upticks of interest in at least Svelte.js and Preact). This stability comes in stark contrast to the unsettled years in the not-too-distant past when the balance of power between different JS frameworks was constantly shifting as new frameworks and libraries appeared on the scene.
Looking beyond the battle of JS frameworks, a significant trend with regard to frontend development is the ever-increasing share of single-page applications among our projects’ frontends. Around 64% of this year’s Project Radar respondents reported to be working with single-page applications, up from 57% in last year’s Project Radar results.
Node.js on the rise
Moving our focus to the backend, where Java has traditionally held a predominant position among our projects, a somewhat different trend emerges. While the Project Radar data clearly brought out a tendency toward consolidation around the three major frontend frameworks, the picture on the backend side, on the other hand, looks a little more fragmented. Last year’s Gofore Project Radar pegged Java usage at nearly 50% among all projects represented in the survey, trailed by Node.js and C# each with a 15% share of the cake. While Java still came out on top this year, it was reported as the primary backend language in only 32% of the projects, down a whopping 15 points from last year’s results.
This drop was fully matched by an upward surge by Node.js, which more than doubled its share of the overall pie, up 17 points from last year. While C# stood its ground at close to 15%, a crop of new languages, missing from previous years’ results, entered the fray in the form of Kotlin, Clojure and TypeScript. Although only a handful of projects reported them as primary backend languages, they contributed to the growing share of minority languages in our backend landscape, a group previously comprised of Scala, Python, Ruby and PHP.
Similarly to how respondents were asked to choose their hoped-for replacement tech for their frontends, we also asked our developers what was their preferred language for rewriting their backends if given the chance to do so. Last year most respondents would take the cautious approach and stick with their previously established backend languages. This year, however, there was considerable interest in rewriting backends in Kotlin, particularly among respondents who reported Java as their primary backend language (55% of all respondents were eager to switch to Kotlin from some other language).
Before drawing any conclusions from these statistics, it should be noted that upwards of 55% of respondents reported to be working with a microservices-type backend stack, suggesting that potentially multiple languages and server-side frameworks might be used within a single project. Still, the appeal of Kotlin, particularly among Java developers, is clearly apparent, as is the shift toward Node.js being the centerpiece of most of our backend stacks.
The popularity of Kotlin, on the other hand, has been picking up ever since Google enshrined it as a fully supported language for Android development. Considering its status as one of the fastest-growing programming languages in the world, its increasing presence in server environments is hardly surprising.
Now where do we run our project infrastructure in the year 2018? According to last year’s Project Radar results, more than two thirds (68%) of all respondents were still running their production code in a data center that was managed either by the client or a third party. This year, that number had come down to 59%. While this isn’t particularly surprising, what is mildly surprising, though, is the fact that IaaS-type infrastructure saw an even greater decline in utilization. Only 47% of all respondents reported to be running their production code in an IaaS (Infrastructure as a Service) environment, as opposed to 60% last year.
As the utilization of both traditional data center environments and IaaS services fell off, PaaS (Platform as a Service) and, especially, serverless (or FaaS, Function as a Service) platforms were reported to take up a fair portion of the overall share of production environments. While still in the minority, PaaS services were reported to be used by 12% of all respondents, tripling their share of 4% from last year, and serverless platforms by 16.5% of all respondents (no reported usage last year as there was no dedicated response option for it).
As our projects’ production code is more and more removed from the actual hardware running it, containerization has also become more commonplace, as evidenced by the fact that Docker is now being used by 76% of all respondents (up from 43% last year). Despite Docker’s increasing adoption rate, there wasn’t much reported use for the most popular heavy-duty container orchestration platforms: Kubernetes, Docker Swarm, Amazon Elastic Container Service and OpenShift Container Platform were only reported to be used by 14% of all respondents.
Since running our code in cloud environments enables shorter deployment intervals, one could think we’d be spending more time flipping that CI switch that kicks off production deployment. And to some extent, we do: we have fewer projects where production deployments occur only once a month or less often (10% as opposed to 20% last year), but also, somewhat surprisingly, fewer projects where production deployments are done on a daily basis (10.5% vs 12% last year).
- Key-value databases doubled their reported project adoption (32% vs 16.5% last year)
- Jenkins was the most prevalent CI platform among represented projects, with a 57% adoption rate; its closest competitor, Visual Studio Team Services/Azure DevOps, was well behind at 17%
- Close to nine percent of all respondents reported to be using a headless CMS (Content Management System)
- Ansible was being used by 71% of respondents who reported using some configuration management (CM) tool, clearly ahead of any other CM tools (Chef was being used by a little shy of eight percent of CM tool users, while Puppet had no reported users)
- Development team sizes were smaller than last year (57% of dev teams had five or more team members last year, whereas this year such team sizes were reported by 52% of respondents)
- The reported number of multi-vendor teams was smaller than last year (41% vs 47% last year)
- Most respondents reported to be working on a project that had been running 1-3 years at the time of responding
- Most project codebases clock in at 10k – 100k in terms of LOC (lines of code)
- Scrum was the most favored project methodology, being employed by nearly 51% of all represented projects. Kanban, on the other hand, saw the most growth of any methodology (22% vs 12% last year)
Some closing thoughts
Once again, the annual Project Radar has told us a great deal about our preferred programming languages, frameworks, tooling and various other aspects of software development at Gofore. While the survey is by no means perfect – and I can easily come up with heaps of improvement ideas for the next iteration – the breakdown of its results enables us to more easily pick up technological trends in our ever-increasing multitude of projects. This year’s key takeaways are mostly reflections of industry trends at large, but there are some curiosities that would be hard, if not impossible, to spot if not for the Project Radar. The usefulness of these findings is debatable, as some of them fall under trivia, but still they come as close to a “look in the mirror”, technology-wise, as one can get at a rapidly growing company of this size.
I wrote a blog post last year about how bots are used to automate routine work in our company (Gofore). The same topic is even more relevant today when we are stepping into an era of AI. Let’s see what has happened to our bots since my last blog.
30 little bots
Today we have around 30 active bots that integrate with Slack. Almost half of these slackbots are focused on utilisation and billing functions. Reliable utilisation and billing are a consulting company’s engine oil that enables all other activities. These bots monitor people’s hour markings, calibrate utilisation capacity, send billing reminders and catch human errors. Utilisation and billing were also the first functions to be automated.
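Many of these checks boil down to simple rules. As a hypothetical sketch (not the actual bot code, and the function names are my own), an hour-marking reminder could look like this:

```python
from datetime import date, timedelta

def missing_markings(start: date, end: date, marked: set) -> list:
    """Return the workdays in [start, end] that have no hour markings."""
    days = []
    d = start
    while d <= end:
        if d.weekday() < 5 and d not in marked:  # Mon-Fri only
            days.append(d)
        d += timedelta(days=1)
    return days

def reminder_message(user: str, days: list):
    """Build a Slack reminder text, or None if nothing is missing."""
    if not days:
        return None
    listed = ", ".join(d.isoformat() for d in days)
    return f"Hi {user}, you have unmarked hours on: {listed}"
```

A bot like this would run the check on a schedule and post the resulting message to the user via Slack's API.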
The other significant group is reporting slackbots. All companies have a lot of business-critical information that needs to be made visible to employees. These slackbots list, for example, customer statistics, site-based information and the highest-impact blog and social media posts. They can also be used on demand.
The third group of slackbots is everything else. We have an overtime bot, an SLA-observer bot and bots for the sales team. One slackbot updates users’ vacation statuses and another connects people for a beer.
In God we trust, others bring bots
Basically, a bot is a piece of software that performs automated tasks. Simple as that sounds, bots have advantages that many other applications lack. I have listed the three most important ones.
A slackbot’s best asset is simplicity, because its user interface is mostly text and icons. Likewise, interaction with a bot is based on text rather than graphical forms or other UI elements. Some bots are totally invisible to users and just run in the background.
The second advantage is bots’ overall popularity. Many users have used bots before, so a bot’s behaviour is well known and intensive training and user guides can be avoided. Bot messages are displayed in various Slack channels continuously, so promotion also happens naturally.
The third advantage is the Slack platform. Slack provides a smooth user experience, out-of-the-box services (security, authentication, performance, data storage etc.), wide device support and excellent integration options. Although all our bots are handmade, Slack has sped up our development enormously.
Value for life
The value proposition is the reason a product exists; in our case it can be summarised in three points. Better job satisfaction: bots take care of boring and repetitive tasks and let people work on meaningful and interesting duties. Cost savings: the focus is on time-consuming and error-prone functions, and in practice our bots have already replaced a big part of middle-management tasks. Improved decision-making: business-critical data is visible to everybody 24/7. Every new bot idea is validated and prioritised against these three factors.
Some months ago, our bot team created an internal survey regarding how people feel about our slackbots. The results were very promising – 95% of people think that the bots are useful and 30% of people think that the bots are vital to the company. This feedback gave an extra boost and motivation to the whole team to continue development work.
Work in progress
My estimate is that our company still has around 20-30 manual processes that could easily be automated by bots: parts of the recruiting process, subcontractor management, credit card administration and device handling, just to name a few. After this low-hanging fruit has been picked, it’s time to add more AI to the bots.
The outcome of many internal projects is mediocre. In contrast, bots bring value to our company every single day. When it has been said more than once that these bots are actually part of our company’s competitive advantages, you know that product development has reached a goal.
Juhana Huotarinen – the proud Product Owner of the Gofore Bot Team
In my last blog post I shared my ideas about some nice features our meeting room system should have – one was measuring air quality in meeting rooms. Soon after publishing the blog post, I got a call from Mika Flinck from Digita who offered a helping hand to develop this feature. After the call, Digita sent two Elsys ERS-CO2-sensors, which work on Digita’s Long Range Wide Area Network (LoRaWAN), for us to use for developing and testing purposes. The sensors can measure a room’s temperature, moisture, level of lightness and carbon dioxide (CO2).
One of the Elsys ERS-CO2-sensors in Tampere.
LoRaWAN is a wireless Low Power, Wide Area Network (LPWAN) networking protocol administrated by the LoRa Alliance association. Thanks to the low-power technology, IoT devices on a LoRaWAN can have batteries that last up to 10 years, and devices typically send messages to the network infrequently, for example every 15 minutes.
Architecture of the current solution.
In Digita’s LoRaWAN all messages and commands are handled via Actility Thingpark which works as a gateway between LoRaWAN and the Internet. In our case, Actility Thingpark will resend all messages in the JSON-format from LoRaWAN to Amazon Web Services’ (AWS) API Gateway. After that, the API Gateway sends messages to Lambda which decodes the Elsys payload and the decoded information is finally sent to our meeting room system in EC2. All client systems can get updated information from the server.
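The Lambda decoding step can be sketched roughly as follows. This is a minimal sketch assuming Elsys’s generic binary payload format, where each reading is a type byte followed by a big-endian value (temperature in tenths of a degree Celsius, CO2 in ppm); the real decoder must of course follow the payload documentation of the sensor firmware:

```python
def decode_elsys_payload(data: bytes) -> dict:
    """Decode an Elsys-style TLV sensor payload into named readings."""
    TEMP, HUMIDITY, LIGHT, CO2 = 0x01, 0x02, 0x04, 0x06
    readings = {}
    i = 0
    while i < len(data):
        t = data[i]
        if t == TEMP:       # 2 bytes, signed, tenths of a degree C
            raw = int.from_bytes(data[i+1:i+3], "big", signed=True)
            readings["temperature"] = raw / 10.0
            i += 3
        elif t == HUMIDITY: # 1 byte, % relative humidity
            readings["humidity"] = data[i+1]
            i += 2
        elif t == LIGHT:    # 2 bytes, lux
            readings["light"] = int.from_bytes(data[i+1:i+3], "big")
            i += 3
        elif t == CO2:      # 2 bytes, ppm
            readings["co2"] = int.from_bytes(data[i+1:i+3], "big")
            i += 3
        else:
            break  # unknown field type; stop rather than misparse
    return readings
```

In the real pipeline, the Lambda handler would pull the hex payload out of the JSON message from Actility Thingpark, run it through a decoder like this, and forward the readings to the meeting room server.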
What is good room air quality?
For the meeting room system, I used several sources for gathering ideal values of good air quality. I preferred using information from the Finnish Institute of Occupational Health (FIOH) and The Organisation for Respiratory Health, which contained recommendations for air temperature and moisture according to seasonal and weather conditions. Also, working conditions give some frames for good room air quality. I used the following values for our meeting room system.
| Air quality | Moisture (%) | Carbon dioxide (PPM) | Temperature (°C) |
|---|---|---|---|
| Good | 25 – 45 | < 800 | 20 – 23 |
| | 0 – 25 or 45 – 70 | 800 – 1150 | 19 – 20 or 23 – 25 |
| Very bad | > 70 | > 1150 | < 19 or > 25 |
The limits are averaged from several sources and assume daily work in an office environment. Now the meeting room tablets can visualize the level of each metric using different colours. In the future, we will develop a feature that draws all the limits on the timeline graphs and visualizes any points exceeding them.
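As a rough sketch of how the colour coding could classify a reading against these limits (the middle band’s label and the exact boundary handling are my assumptions, not taken from the production system):

```python
def rate_reading(metric: str, value: float) -> str:
    """Classify a sensor reading against the office air-quality limits."""
    limits = {
        # metric: (good ranges, middle-band ranges); anything else is "very bad"
        "humidity":    ([(25, 45)],  [(0, 25), (45, 70)]),
        "co2":         ([(0, 799)],  [(800, 1150)]),
        "temperature": ([(20, 23)],  [(19, 20), (23, 25)]),
    }
    good, fair = limits[metric]
    if any(lo <= value <= hi for lo, hi in good):
        return "good"      # e.g. shown as green on the tablets
    if any(lo <= value <= hi for lo, hi in fair):
        return "fair"      # e.g. yellow
    return "very bad"      # e.g. red
```

The tablets would then simply map each returned rating to a colour next to the metric.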
LoRaWAN-sensors are very easy to handle: just configure and forget. In the ideal case, the only maintenance is changing the sensor’s battery after a few years. The meeting room system now has configuration support where we can determine which room a sensor is located in. When a sensor is moved, we just link it to the new room.
For measuring air quality, I have a vision of collecting people’s subjective opinions and combining them with the sensor data. This would make us smarter about what good air quality actually is, especially when taking into account how many people were in the room. Maybe someday our Seppo-bot can ask a few simple questions after each meeting.
Big thanks to Mika Flinck from Digita for lending us the LoRaWAN-sensors for development and testing purposes! This was a great opportunity to learn about LoRaWAN and develop our meeting room system further.
I came to Gofore as a trainee six months ago. Now that I have started as a full-time employee here in the growing cadre of data-oriented people, I can shed some more light on how a former physicist found his way to an IT consultancy company.
My journey to Gofore started at the end of 2017. Having been unemployed for quite some time, I heard from a friend that he had found his workplace through a training/recruiting program at Saranen Consulting. I had been vaguely aware of the existence of these programs, but after this I started keeping my eyes open for one that would fit my professional profile. Sure enough, at the end of the year, I noticed Saranen had a program starting in early 2018 called AnalyticsPro. With a focus on developing competency in the field of data analytics, this seemed to be right up my alley. The program consisted of some training days and, most importantly, working as a trainee in a company doing real work for the duration of the program. Just what I had been looking for: an opportunity to do actual work in the field of data analytics and show that I can deliver real results. I attended an information event about the program mid-January, and, convinced that this would help me find a career, I sent in my application.
A variety of skills
The process started well for me. I received an invitation to an interview at Saranen a few days after submitting my application for the program. The first interview was a group interview with about two dozen people participating. When we were going around the table, each person introducing themselves, I was quite amazed at the variety of skills people were bringing to the program. There were coders, mathematicians, engineers, marketers. There was even one former professional poker player. As there were many people, the interview was quite short, focusing on our strengths and personal development expectations. I remember being a little nervous about the event, hoping my scientific strengths would carry me through to the next step.
And proceed I did. The very next day I received an invitation to the second round of interviews, carried out as video interviews. So, I put on a nice shirt (wearing comfortable college trousers under the table), and answered a few questions into my laptop’s video camera. This was a new experience for me, and it took a few tries to get good enough videos for my liking. I sent the videos onward, again hoping for the best.
The next few weeks were a harrowing time for me. Time went on, the application period for the program ended, and the good people at Saranen were hard at work finding companies for all the people in the program. Though there were weekly information emails from Saranen reassuring me that I was still in the program, no further interviews at companies were coming my way.
Until the very end of February. I finally had interviews in two places: THL, a large governmental institution, and Gofore, a consultancy company that I had never heard of until this time. THL was the second interview; they didn’t seem too happy with me, and in the end, decided to proceed with someone else.
The contract was signed
At Gofore I had two interviews. The first one was more general and focused on the company and what my role here could be. The interview went well, and I would proceed to the second one. This was more technical with some analysis problems. I felt a bit clumsy with my solutions, but I got to the end and did convince the interviewers that I could contribute. I was in the program. The contract was signed by all parties and I started my traineeship.
I was a bit late to join the program, the hiring process having taken some time, and I missed the first few training days at Saranen. I joined the training days shortly before starting my time at Gofore, at the end of March. The first training I attended was for Hadoop and Spark in the cloud, very much big data. All in all, the training consisted of single days, each dedicated to one technology or concept (the exceptions being three days for data visualisation with different BI programs and two for web analytics). As a whole, it proved to be a good introduction to the wide field of data science today, with plenty of information and examples. As a tech-savvy, hands-on kind of guy, I would have wished for the training to be a bit more challenging and deep, but I understand it had to be suitable for people with various backgrounds. And it did provide plenty of information for anyone interested in going further on their own.
I got a free hoodie!
On my first actual working day at Gofore, I got a backpack full of Gofore clothing (hoodie!), and had my induction at the company by my ‘people person’ (PP) and a culture coach, with a free lunch. I was also introduced to my mentor during the traineeship, Juho Salmi. My trainee project would be to analyse user data in the company internal personnel tool, Hohto, and develop analytics for various purposes. To this end I was attached to the Hohto team, to learn how to access the data and to see how Hohto was being developed.
As someone with an academic research background, this was a bit of a culture shock for me. I had no experience in software development, and even though I learned much about this work in the following weeks, I felt like I was not contributing much. I was mostly working with the data by myself, giving regular updates to my mentor about my progress, and trying to follow the work of the Hohto team. My biggest contribution was participating in their daily standups and saying something along the lines of “Still working on the data, nothing new to report”. Looking back, I feel this is the biggest area of development for Gofore concerning onboarding people with my background. Then again, I cannot say how things could have been done better at the time, as there were not many people around focusing on data analysis. And I was not left alone: I was in constant contact with my mentor and had regular checkups with my PP about my progress.
Things improved considerably for me at the beginning of June, when I and a few other analytics-oriented summer trainees were rounded up by Juho to form the Gofore X team, working on internal proofs of concept. I was joined by Tommi, Max and Teemu, and we started a scrum of our own, with Meeri as our scrum master. For me, this was the time when things really started to fall into place. I was now surrounded by people working on subjects similar to my own, ready to discuss and comment on the work, and the sprint structure with dailies and weeklies gave structure and focus to our work.
Work was not the only thing that was flowing nicely at that point. After I got used to showing up at the office every day and started to get a feel for my surroundings, I came to like the company. I had a lot of freedom in my project, the people around me were professional and helpful, and the office and equipment were excellent. I had a good time with my work and training, I was learning a lot and even producing results with my analysis work (to be made public in the near future).
Looking forward to what the future brings
When things got flowing, summer went by surprisingly quickly, and it came time to finish and evaluate the traineeship period. Being a little stressed about the meeting with all the stakeholders that would decide my near future, I was relieved to hear everyone (including myself) was happy with my progress during these months here. There were also interesting sounding analytics projects lined up for the autumn in which I could participate, so it was unanimously decided I would continue working with data at Gofore.
And here I am now, a data scientist in an IT consultancy company. Looking forward to what the future brings!
Final addendum: As I was writing this blog post, a colleague got ill, and I volunteered to take his position in the team interviewing a potential new employee. I would administer the same interview task I worked on myself some months ago, having moved from one side of the table to the other. The circle was complete.