Using an ORM library is typically the default choice when using relational databases. Yet what if you don’t want to use an ORM? Despite making life easier in a lot of ways, ORM libraries have their limitations and quirks, and inevitably obscure your view into the resulting SQL, which often results in inefficient queries.
So let’s say you want to avoid using an ORM, but would rather not write raw SQL everywhere either. One option is to use a query builder library such as jOOQ, Squirrel, or Knex. Query builder libraries provide a relatively thin abstraction over SQL, but make writing queries more convenient and – crucially – composable.
In the project I’ve been working on for the past year, we use Knex on top of PostgreSQL, which provides first-class JSON support. In this blog post, I’d like to illuminate how we utilize some of PostgreSQL’s JSON functions to help with managing relations without an ORM.
First, let’s consider the following very simple data model:

Imagine that we want to fetch a person with id 1, along with the person’s hobbies, including the category of each hobby. The desired result should look something like this:

{
    id: 1,
    name: 'John Doe',
    hobbies: [
        {
            id: 1,
            name: 'Tennis',
            category: { id: 1, name: 'Sports' }
        },
        {
            id: 2,
            name: 'Minesweeper',
            category: { id: 2, name: 'Video games' }
        }
    ]
}

Using an ORM library such as Bookshelf (which internally uses Knex), one needs to only define the models and their relations, after which actually fetching the desired result is as easy as this:

Person.where('id', 1).fetch({ withRelated: ['hobbies.category'] });

Knex has no concept of models, so accomplishing the same result would involve fetching the desired Persons, their Hobbies and Hobby Categories, and mapping them together manually in the service layer to provide the desired result. That’s a fair bit of work.
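To make that "fair bit of work" concrete, the manual stitching in the service layer might look roughly like this (a sketch with illustrative row shapes and a hypothetical helper name, not code from the project):

```javascript
// Manual service-layer mapping of three separate query results into the
// desired object graph. The row arrays stand in for the results of plain
// Knex queries against person, hobby, hobby_category and person_hobby.
function assemblePerson(personRow, hobbyRows, categoryRows, personHobbyRows) {
  const categoriesById = new Map(categoryRows.map(c => [c.id, c]));
  const hobbyIds = personHobbyRows
    .filter(ph => ph.person === personRow.id)
    .map(ph => ph.hobby);
  const hobbies = hobbyRows
    .filter(h => hobbyIds.includes(h.id))
    .map(h => ({
      id: h.id,
      name: h.name,
      category: categoriesById.get(h.hobby_category),
    }));
  return { id: personRow.id, name: personRow.name, hobbies };
}
```

Multiply this by every relation in a real data model and the appeal of pushing the work into the database becomes clear.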
What we would like to do is write one query that returns the entire desired object graph in the correct format. As it happens, PostgreSQL provides JSON building functions that accomplish exactly this: row_to_json, which turns a single row into a JSON object with column names as keys, and json_agg, which does the same for multiple rows and returns the results neatly in an array. Similar functionality can be achieved in MariaDB/MySQL, but not as conveniently.
Let’s first consider the case of fetching a Hobby and its associated Category. We need a query for each:

SELECT id, name FROM hobby_category WHERE hobby_category.id = hobby.hobby_category;
SELECT id, name FROM hobby;

Using the row_to_json function, we can now get the Category as a subquery within the Hobby query:

SELECT name, id,
    (SELECT row_to_json(sq) FROM
        (SELECT name, id FROM hobby_category
            WHERE hobby_category.id = hobby.hobby_category) sq
    ) as category
FROM hobby;

The single row returned from the subquery is turned into an object and embedded within the main result, and we get the following result, exactly as we wanted:

{ id: 1, name: 'Tennis', category: { id: 1, name: 'Sports' } }

Now let’s consider the second case of fetching a Person and their Hobbies. This is a many-to-many relation, so we need to use a join to a junction table in the subquery:

SELECT name, id FROM person;
SELECT name, id FROM hobby INNER JOIN person_hobby ON person_hobby.hobby = hobby.id WHERE person_hobby.person = person.id;

Now we embed the subquery like before, but using the json_agg function instead:

SELECT person.name, person.id,
    (SELECT json_agg(sq) FROM
        (SELECT hobby.name, hobby.id FROM hobby
            INNER JOIN person_hobby ON hobby.id = person_hobby.hobby
            WHERE person_hobby.person = person.id) sq
    ) as hobbies
FROM person;

Again, the results look exactly as we wanted:

{ id: 1, name: 'John Doe', hobbies: [{ id: 1, name: 'Tennis' }, { id: 2, name: 'Minesweeper' }] }

So we are now getting the results in the form we want with just SQL – all we need to do is to combine the two queries. Since we’ve seen the raw SQL, let’s translate it into Knex. First, let’s define a function for each table that returns the basic contents of the table:

// In the real world, these would be more complex queries, but the basic principle is the same: functions that return queries which can be composed and embedded in other queries
const getCategories = () => knex('hobby_category').select(['hobby_category.id', 'hobby_category.name']);
const getHobbies = () => knex('hobby').select(['hobby.id', 'hobby.name']);
const getPersons = () => knex('person').select(['person.id', 'person.name']);

We can then compose them together:

const category = getCategories()
    .whereRaw('hobby_category.id = hobby.hobby_category');
const hobbies = getHobbies()
    .select(knex.raw('(SELECT row_to_json(sq) FROM ? sq) as category', [ category ]))
    .innerJoin('person_hobby', 'person_hobby.hobby', 'hobby.id')
    .whereRaw('person_hobby.person = person.id');
const query = getPersons()
    .select(knex.raw('(SELECT json_agg(sq) FROM ? sq) as hobbies', [hobbies]))
    .where('person.id', 1);

Running the query will return the following results:

{
    id: 1,
    name: 'John Doe',
    hobbies: [
        {
            id: 1,
            name: 'Tennis',
            category: { id: 1, name: 'Sports' }
        },
        {
            id: 2,
            name: 'Minesweeper',
            category: { id: 2, name: 'Video games' }
        }
    ]
}

This is exactly the result we originally wanted.
With a bit of work (the exact details of which are outside the scope of this blog), you can extend Knex’s query builder function to get the final query to look something like this:

const category = getCategories()
    .whereRaw('hobby_category.id = hobby.hobby_category');
const hobbies = getHobbies()
    .includeOne('category', category)
    .innerJoin('person_hobby', 'person_hobby.hobby', 'hobby.id')
    .whereRaw('person_hobby.person = person.id');
const query = getPersons()
    .includeMany('hobbies', hobbies)
    .where('person.id', 1);

For anyone proficient in SQL, this is quite readable and fairly easy to pick up.
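As a rough illustration of what such an extension might expand to under the hood, here are hypothetical helpers mirroring the raw SQL fragments above (the real extension would operate on Knex query builder objects rather than SQL strings):

```javascript
// Hypothetical expansions of includeOne/includeMany into the raw SQL
// fragments used earlier: one embeds a row_to_json subquery, the other
// a json_agg subquery, each under the given alias.
function includeOne(alias, subquerySql) {
  return `(SELECT row_to_json(sq) FROM (${subquerySql}) sq) as ${alias}`;
}

function includeMany(alias, subquerySql) {
  return `(SELECT json_agg(sq) FROM (${subquerySql}) sq) as ${alias}`;
}
```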
If you really wanted to, you could even define some specialized classes named after your tables that contain query logic as well as knowledge of relations that you’d commonly want to include, so you could eventually end up with something even more readable like this:

Person.where('id', 1).fetch({ withRelated: ['hobbies.category'] });

Jokes aside, this basic approach has worked very well in my current project. Although using an ORM would undoubtedly have made basic queries simpler to write, with PostgreSQL basic queries are not a problem regardless, and an ORM tends to become an obstacle whenever you try to achieve something more complicated. Although you can generally fall back to raw SQL or a query builder even with an ORM, sticking close to SQL by default makes you think about how to solve your problems with SQL in general, resulting in better queries and improved SQL skills for those complicated cases. If your ORM is causing you headaches, I’d gladly recommend sticking to a query builder and utilizing the capabilities of your DBMS.


Risto Ollila

Risto is a full-stack software developer with an interest in clean software architecture and efficient, maintainable code. Currently an MSc student in Computer Science at the University of Helsinki, his work of late has utilized a lot of Node, PostgreSQL and Vue.

Do you know a perfect match? Sharing is caring

TL;DR Prefer map and filter over forEach when you need to copy an array or part of it to a new one.
One of the best parts for me in the consulting line of work is that I get to see countless projects. These projects vary widely in size, the programming languages used and in developer competence. While there are multiple patterns that I feel should be abandoned, there is a clear winner in the JavaScript world: forEach to create new arrays. The pattern is actually really simple and looks something like this:

const kids = [];
people.forEach(person => {
  if (person.age < 15) {
    kids.push({ id: person.id, name: person.name });
  }
});

What happens here is that the array of all people is processed to find everyone aged under 15. Each of these ‘kids’ is then copied to the kids array by selecting a few fields of the person object.
While this works, it is a very imperative (see Programming paradigms) way to code. “So what is wrong?” you might be wondering. To understand this, let’s first familiarize ourselves with the two friends map and filter.

map & filter

map and filter were introduced to JavaScript quite a while ago as part of ECMAScript 5.1 in 2011. They are array methods that allow a more functional style of coding in JavaScript. As usual in the functional programming world, neither method mutates the original array; instead, both return a new array. Both accept a single parameter of type function, which is called on each item in the original array to produce the resulting array. Let’s see what the methods do:

  • map: the result of the function called for each item is placed in the new array returned by the method.
  • filter: the result of the function called for each item determines whether the item should be included in the array returned by the method.

They also have a third friend in the same gang but it is a little less used. This friend’s name is reduce.
Here are simple examples to see them in action:

const numbers = [1, 2, 3, 4, 5];
const doubled = numbers.map(number => number * 2); // [2, 4, 6, 8, 10]
const even = numbers.filter(number => number % 2 === 0); // [2, 4]
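And since reduce was mentioned, here is an equally small example of it folding an array into a single value:

```javascript
const numbers = [1, 2, 3, 4, 5];
// reduce combines the items into one value; here, a running sum
// starting from the initial value 0.
const sum = numbers.reduce((total, number) => total + number, 0); // 15
```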

Now that we know what map and filter do, let’s next see an example of how I would prefer the earlier example to be written:

const kids = people
  .filter(person => person.age < 15)
  .map(person => ({ id: person.id, name: person.name }));

In case you are wondering about the arrow function syntax used in map, see this Stack Overflow answer for an explanation.
So what exactly is better with this implementation:

  • Separation of concerns: Filtering and changing the format of the data are two separate concerns and using a separate method for both allows separating the concerns.
  • Testability: Having a simple, pure function for both purposes can easily be unit tested for various behaviours. It is worth noting that the initial implementation is not as pure as it relies on some state outside of its scope (kids array).
  • Readability: As each method has a clear purpose – either filtering out data or changing its format – it is easy to see what kind of manipulations are being done, especially as there are more functions in the same family, such as reduce.
  • Asynchronous programming: forEach and async/await don’t play nicely together. map on the other hand provides a useful pattern with promises and async/await. More about this in the next blog post.

It is also worth noting here that map is not to be used when you want to cause side effects (e.g. mutate global state). Especially in a case where the return value of the map method is not even used or stored.
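To illustrate the asynchronous point from the list above (a sketch, not from the original example):

```javascript
// forEach discards the promises an async callback returns, so there is
// nothing to await; map collects them so Promise.all can await them all.
async function doubleAll(numbers) {
  return Promise.all(numbers.map(async number => number * 2));
}
```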

Conclusions

Usage of map and filter provides many benefits such as separation of concerns, testability, readability and support for asynchronous programming. Thus, it is a no-brainer for me. Yet, I constantly encounter developers using forEach. While functional programming might be a little scary, there is nothing to be afraid of with these methods even though they have some traits from that world. map and filter are also heavily used in reactive programming, which is used more and more in the JavaScript world too thanks to RxJS. So next time you are about to write a forEach, first think about the alternative approaches. But be warned: they might change the way you code permanently.


Roope Hakulinen

As a lead software developer Roope works as team lead & software architect in projects where failure is not an option. Currently Roope is leading a project for one of the world's largest furniture retailers.


The very first internal conference of Gofore – titled, of course, Gonference – was held on Ascension Day (10th of May). In this blog post, I’ll introduce how the Dev track of the conference looked along with the videos of some of the talks. I had the pleasure to curate and host the Dev track. If you want to read more about the Gonference in general, you can read what one of the main organizers, Jarkko Koistinaho, had to say about it here.


Dev track scope

One of the hardest parts of organizing the Gonference was selecting the track themes and scoping the presentations. The organizing committee received so many excellent topics within a wide range of areas that the division was not an easy task. After a lot of pondering, the resulting tracks were named Design & Leadership, DevOps and Dev. The split between Dev and DevOps was still a little obscure, but the organizing committee drew an arbitrary line between the tracks. Based on the feedback, the line seemed to work quite well.

Presentations

Even though the idea was initially not to publish any presentations outside of Gofore, it was soon clear that the quality of talks was way too high to be kept within the company. Thus, after the Gonference we decided to publish some appetizers for you to enjoy.


Each presentation slot consisted of a 20-minute presentation followed by 5 minutes of questions and discussion. However, two of the presentations on this track were a little more in-depth, so they were given double the time for both sections. In total there were four shorter and two longer presentations, and out of the six we have the pleasure to share three with you.


The very first talk of the Gonference’s Dev track was given by one of Gofore’s top software architects, Jari Timonen. Knowing the person, it was not surprising that the topic was related to the Java EE world, the title being “Taming a dinosaur: Introduction to Jakarta EE (formerly Java EE)”. Especially for us young hipster programmers, Jari’s presentations on the topic are really eye-opening and educational. You can see the presentation below.


One of the more in-depth presentations was given by Teemu Erkkola on a topic we’ve heard discussed so many times within the company – Rust – with the title “A tour of Rust”. The presentation gives an excellent, hands-on introduction to what Rust looks like and why it is such a compelling choice nowadays for multiple purposes. It also features an enormous amount of live coding for those who especially enjoy that. Check it out below!


While many of the talks were quite heavy on technical details, Arno’s presentation on “Exploring the possibilities of virtual environments” gave a nice balance to the day by talking about the possibilities of Virtual Reality while demonstrating what we have already done in the field. The presentation also contains a live demo of how to get started with VR without prior knowledge. Here it is.


Conclusions

In general, the Gonference was a great success and the feedback gathered was really positive. There were also many great improvement ideas for next time – and there will most probably be a next time. This kind of event is an excellent way to balance project work and personal development by providing high-quality talks closely related to the topics we all at Gofore wrestle with in our day-to-day work.
All our published videos can be found on our YouTube channel.

Roope Hakulinen

As a lead software developer Roope works as team lead & software architect in projects where failure is not an option. Currently Roope is leading a project for one of the world's largest furniture retailers.


I recently had to implement Amazon Cognito JWT authentication with Spring Boot, and since I wasn’t able to find any clean & simple guides with quick googling, I decided to write my own. In this blog post, I’m going to walk you through the steps of the process I used.
We are going to implement a Spring Boot application that is able to authenticate the user against Amazon Cognito using the OAuth 2.0 authorization code grant and JSON Web Tokens. All code examples are written in Kotlin.
This post is not going to cover Cognito itself. I expect you to know what Amazon Cognito is and how to configure it.
NOTE: This is a practical guide with lots of code examples. For the sake of simplicity, the code contains only the necessary components for the authentication to work, and you are expected to add more features for it to be safe & efficient in a production environment.
Ok, so let’s get started.

Settings & dependencies

First, we need to set up some dependencies. Spring Boot starter web & security are pretty obvious, and Nimbus JOSE + JWT is the library we are going to use to handle the JSON Web Tokens.

build.gradle

compile('org.springframework.boot:spring-boot-starter-web')
compile('org.springframework.boot:spring-boot-starter-security')
compile('com.nimbusds:nimbus-jose-jwt:5.12')

Next, let’s define some properties:

application-properties.yml
urls:
  cognito:    # cognito root auth url
endpoints:
  authorize: ${urls.cognito}/oauth2/authorize?response_type=code&client_id=${cognito.client}&redirect_uri=${cognito.callback}
  token: ${urls.cognito}/oauth2/token
cognito:
  client:     # cognito client id
  secret:     # cognito client secret
  callback:   # valid callback url set in cognito
  keys:       # url for cognito jwt keys

Here we specify:

  • Base URL for Cognito authentication
  • Endpoint URLs for authorization and token requests
  • Cognito client_id
  • Cognito client_secret
  • Cognito callback_uri
  • URL of Cognito public keys

You’ll get all these values from your Cognito configuration.

Authentication

The next step is to define a processor bean for tokens and configure it to use the specified keys URL as a key source. This bean is responsible for processing and verifying the token and extracting the authentication details. Most of the work is done under the hood, so not much manual configuration is needed at this point; basically, we just need to set the key source and the algorithm (RS256 in this example).

JwtProcessor.kt
@Configuration
class JwtProcessor {
    @Value("\${cognito.keys}")
    private val keySource: String = ""
    @Bean
    fun configurableJWTProcessor(): ConfigurableJWTProcessor<SecurityContext> {
        // Connect and read timeouts for fetching the remote JWK set, in milliseconds
        val resourceRetriever = DefaultResourceRetriever(5000, 5000)
        val jwkSetURL = URL(keySource)
        val jwkSource: JWKSource<SecurityContext> = RemoteJWKSet(jwkSetURL, resourceRetriever)
        val jwtProcessor: ConfigurableJWTProcessor<SecurityContext> = DefaultJWTProcessor()
        val keySelector = JWSVerificationKeySelector(JWSAlgorithm.RS256, jwkSource)
        jwtProcessor.setJWSKeySelector(keySelector)
        return jwtProcessor
    }
}

We also need to set up a filter which filters all our authenticated requests, extracts the token from the headers, and sends it for processing. The filter is also responsible for denying any requests that don’t contain a valid token. We first try to extract the token from the Authorization header and then extract the actual authentication and claims. If the token is valid, we manually set the Spring Security context and let the request go forward. We also catch any exceptions thrown by the processor in case the token is not valid, and respond with 401.

For the sake of simplicity in this tutorial, we implement a quick and dirty CognitoAuthenticationToken class for spring security:

CognitoAuthenticationToken.kt
class CognitoAuthenticationToken(
    private val token: String,
    details: JWTClaimsSet,
    authorities: List<GrantedAuthority> = listOf()
) : AbstractAuthenticationToken(authorities) {
    init {
        setDetails(details)
        isAuthenticated = true
    }
    override fun getCredentials(): Any {
        return token
    }
    override fun getPrincipal(): Any {
        return details
    }
}

And then the actual filter:

AuthFilter.kt
class AuthFilter(
    val processor: ConfigurableJWTProcessor<SecurityContext>,
    authenticationManager: AuthenticationManager
) : BasicAuthenticationFilter(authenticationManager) {
    override fun doFilterInternal(
        req: HttpServletRequest,
        res: HttpServletResponse,
        chain: FilterChain
    ) {
        try {
            val token = extractToken(req.getHeader("Authorization"))
            val authentication = extractAuthentication(token)
            SecurityContextHolder.getContext().authentication = authentication
            chain.doFilter(req, res)
        } catch (e: AccessDeniedException) {
            LoggerFactory.getLogger(this.javaClass.simpleName).error("Access denied: ${e.message ?: "No message"}")
            res.status = 401
            res.writer.write("Access denied")
        }
    }
    /**
     * Extract token from header
     */
    private fun extractToken(header: String?): String? {
        val headers = header?.split("Bearer ")
        return if (headers == null || headers.size < 2) {
            null
        } else {
            headers[1]
        }
    }
    /**
     * Extract authentication details from token
     */
    @Throws(AccessDeniedException::class)
    private fun extractAuthentication(token: String?): CognitoAuthenticationToken? {
        if (token == null)
            return null
        return try {
            val claims = processor.process(token, null)
            CognitoAuthenticationToken(token, claims)
        } catch (e: Exception) {
            throw AccessDeniedException("${e.javaClass.simpleName} (${e.message ?: "No message"})")
        }
    }
}

Now that we have a working processor and filter, we can implement the configuration for Spring Security as follows (note that we want our /auth endpoints to be unprotected, since they are used for the actual authentication requests):

AuthConfig.kt
@EnableWebSecurity
class AuthConfig(val processor: ConfigurableJWTProcessor<SecurityContext>) : WebSecurityConfigurerAdapter() {
    override fun configure(http: HttpSecurity) {
        http
            .authorizeRequests()
            .antMatchers("/auth/**").permitAll()
            .anyRequest().authenticated()
            .and()
            .addFilter(AuthFilter(processor, authenticationManager()))
            .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
    }
}

Next, we need to define the user-facing endpoints for authentication requests. We need two: one for redirecting the user to the Cognito login form (which, after a successful login, redirects the user to the callback URI with an authorization code), and the other for retrieving the actual token with the authorization code. This way the client needs to know almost nothing about Cognito beforehand. The redirect to the login form is handled with a basic redirect response, and the token is retrieved by sending a POST request to Cognito’s /oauth2/token endpoint with the authorization code and client id. We’ll set up a service for that. We also need a model for the Cognito JWT:

CognitoJWT.kt
data class CognitoJWT(
    val id_token: String = "",
    val access_token: String = "",
    val refresh_token: String = "",
    val expires_in: Int = 0,
    val token_type: String = ""
)
AuthService.kt
@Component
class AuthService {
    @Value("\${endpoints.token}")
    private val tokenUrl: String = ""
    @Value("\${cognito.client}")
    private val clientId: String = ""
    @Value("\${cognito.secret}")
    private val clientSecret: String = ""
    @Value("\${cognito.callback}")
    private val callbackUrl: String = ""
    /**
     * Get token with authorization code
     */
    fun getToken(code: String): CognitoJWT? {
        val client = RestTemplate()
        val headers = LinkedMultiValueMap<String, String>()
        val auth = "$clientId:$clientSecret".toBase64()
        headers.add("Authorization", "Basic $auth")
        headers.add("Content-Type", "application/x-www-form-urlencoded")
        val req = HttpEntity<Nothing?>(null, headers)
        val url = "$tokenUrl?grant_type=authorization_code&client_id=$clientId&code=$code&redirect_uri=$callbackUrl"
        return client.postForObject(url, req, CognitoJWT::class.java)
    }
}
AuthController.kt
@RestController
@RequestMapping("/auth")
class AuthController(val authService: AuthService) {
    @Value("\${endpoints.authorize}")
    private val authorizeUrl: String = ""
    /**
     * Redirect user to correct url for authorization code
     */
    @GetMapping("/login")
    fun login(): ResponseEntity<Any> =
        ResponseEntity
            .status(HttpStatus.SEE_OTHER)
            .header(HttpHeaders.LOCATION, authorizeUrl)
            .build()
    /**
     * Get aws tokens with authorization code
     */
    @GetMapping("/token")
    fun token(@RequestParam("code") code: String): CognitoJWT? =
        authService.getToken(code)
}

Accessing the claims

Now all that is left is to access the token claims inside the application. We’ll specify a model class for the claims and update our AuthService with a method for extracting these claims from the security context (note that the claims must, of course, match the ones you’ve set up when configuring Cognito).

TokenClaims.kt
data class TokenClaims(
    val uuid: String,
    val auth_time: Long,
    val issued: Date,
    val expire: Date,
    val name: String,
    val cognitoUserName: String,
    val email: String
)
AuthService.kt
fun getClaims(): TokenClaims {
    val authentication = SecurityContextHolder.getContext().authentication
    val details = authentication.details as JWTClaimsSet
    return TokenClaims(
            uuid = details.getStringClaim("sub"),
            auth_time = details.getClaim("auth_time") as Long,
            issued = details.getClaim("iat") as Date,
            expire = details.getClaim("exp") as Date,
            name = details.getStringClaim("name"),
            cognitoUserName = details.getStringClaim("cognito:username"),
            email = details.getStringClaim("email")
    )
}

To access the claims anywhere inside the Spring context, we simply inject our AuthService class and use the getClaims method. Let’s test it by writing a basic controller which returns the extracted claims back to the client:

UserController.kt
@RestController
@RequestMapping("/user")
class UserController(val authService: AuthService) {
    @GetMapping("/me")
    fun getCurrentUser(): TokenClaims {
        return authService.getClaims()
    }
}

Logging in and accessing protected endpoints

You should now be able to log in and access the protected /user/me endpoint.
Steps for logging in and using protected endpoints:
1. Send GET request to /auth/login.
2. Follow the redirect and login to Cognito to get the authorization code.
3. Send a GET request to /auth/token?code={code}, and copy the id_token parameter from the response.
4. Send a GET request to /user/me containing Authorization header with value ‘Bearer {id_token}’.
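The requests in steps 3 and 4 can be sketched as plain request descriptors (a hypothetical client-side helper; the paths match the controllers above, while the base URL depends on your deployment):

```javascript
// Builds the two GET requests a client needs after obtaining the
// authorization code: the token exchange and the protected user call.
function tokenRequest(baseUrl, code) {
  return { method: 'GET', url: `${baseUrl}/auth/token?code=${code}` };
}

function userRequest(baseUrl, idToken) {
  return {
    method: 'GET',
    url: `${baseUrl}/user/me`,
    headers: { Authorization: `Bearer ${idToken}` },
  };
}
```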
Code for a fully working demo is available at:
https://github.com/akselip/spring-cognito-demo


Akseli Piilola

Senior Security Consultant. Experienced programmer with multiple languages. Mostly working with penetration testing, cryptography and authentication systems.


I just returned from my extended summer holidays – eight and a half weeks of combined parental leave and vacation. What made it special is that I took work-life balance to heart and did nothing work related during that time.
This means:

  • no skimming through emails
  • no reading the intranet “just in case”
  • no keeping in touch with projects
  • and no discussing work-related stuff with peers

Why?

I decided to postpone my younger child’s daycare start by a few weeks and stay the whole summer at home taking care of my kids. I’m a bit of a workaholic and easily distracted, so I didn’t want my work to suck up the time that I was supposed to spend on my offspring.
The task of “not working” would not have been easy for me had I not prepared for it.

A graceful exit

I started taking note of everything that depended on me a few weeks before the holidays. I then arranged for someone else to handle anything that would need to be sorted out during my absence (mostly managerial stuff like billing). I also finished the tasks I was working on and checked that the project teams I worked in knew who to contact if they needed help. Generally, I just told people to contact me by phone if something was burning and there was no one qualified to put the fire out (no one did).

Limiting access

On the last day before leaving for my holidays, I used a password generator to create new passwords for Active Directory and Slack with completely random letters and numbers, and pressed enter. I could not access my laptop, my mail, Slack, or the intranet – and that was good, as I had no intention to during the next eight or so weeks. The passwords could be reset on the office premises when required (lesson learned here: on a Mac this is not as easy a process as I thought).

Socializing

I did not aim to isolate myself from my co-workers; I did it to remove unnecessary distractions so I could spend quality time with my kids. I did attend our summer party (Gibiza!) and even spent half an evening at our Gofore Helsinki penthouse while having a boys’ night out in the area. At those events, everyone was courteous enough not to discuss work-related issues with a vacationer, and not so surprisingly, there was still a lot to talk about.

Requirements

Personally, I think that actually having a holiday on your holiday is the best thing since sliced bread. But it’s not for everyone, and even if you would like to keep your distance from work, it might not be possible. The reason I was able to enjoy my holidays in peace was not that I threatened to be a total dick if disturbed, but the strength of our organisation. Goforeans, in general, are really eager to help, and as competencies overlap, there is always someone else to turn to. This meant that I could relax knowing that no project or customer would be left in distress, as there would be someone to step in and help.


Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.
