AWS Re:Invent 2018, day 4

Today was Werner’s turn, and boy, he didn’t disappoint. The keynote was packed with some very welcome announcements. As before, some announcements may be missing from this post, but those can be found on Twitter, the AWS blogs and in the news.
As usual, Werner used a good portion of the keynote to emphasize how critical it is to have control over one’s infrastructure, to avoid “black boxes” and to prepare for the fact that “everything fails all the time”. By now this shouldn’t be a surprise to anyone, and if your architecture doesn’t take this into account you might be in for some nasty surprises in the future. To help companies adopt best practices, AWS has the so-called “Well-Architected Framework”. This set of guidelines and best practices should be familiar to anyone who is using AWS; for those of you who have done the AWS certifications, it is foundational material. Now AWS has come up with a self-service “Well-Architected” testing tool, which can be used to assess how well your development, operational and governance practices align with the Well-Architected Framework.
However, the announcements today were mostly about serverless computing, namely AWS Lambda. There were some huge updates announced, like custom runtimes, layers, ALB support, service integrations for Step Functions and IDE toolkits. The abstraction level keeps on rising, and serverless computing is becoming more and more mainstream.
To sum up the announcements: it is now possible to have your Lambda functions invoked by an ALB, while the functions themselves can be running Rust, C++ or even COBOL, and code can be shared between your functions via layers. Your Step Functions can interact with other services, and you can debug your Lambda functions from your IDE. Additionally, API Gateway now supports WebSockets. Streaming data has also become mainstream (pun intended), and even though AWS has Kinesis, they announced managed Kafka. Running Kafka at scale is no trivial task, so this should be a relief for anyone using Kafka but not wanting to handle the maintenance it requires.
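As a small illustration of the new ALB integration, here is roughly what an ALB-invoked Lambda handler looks like in Python (a minimal sketch; everything except the documented event and response fields is made up):

    import json

    def handler(event, context):
        # ALB wraps the HTTP request into the event; requestContext.elb
        # identifies the target group that invoked the function.
        method = event.get("httpMethod", "GET")
        path = event.get("path", "/")

        # The response must carry statusCode, headers and body;
        # statusDescription and isBase64Encoded are part of the ALB contract.
        return {
            "statusCode": 200,
            "statusDescription": "200 OK",
            "isBase64Encoded": False,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"{method} {path} via ALB"}),
        }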
Building systems without any servers at all is now much more feasible, and serverless should be given very careful consideration when starting new projects. It is a valid option for new development, and instead of prejudice it should be embraced: serverless/Functions as a Service is here to stay.

Aki Ristkari


AWS Re:Invent 2018, day 3

Today was a big day. Wednesday morning is usually the time for Andy Jassy (CEO of AWS) to give his keynote, and that was the case this year too. The keynote was full of announcements, and it will be quite a task to go through them all. I’ll leave some of them out and also include some announcements that weren’t in the keynote.

AI & ML

A huge chunk of the talk was about ML. Like Google with its TPU processors for running ML models, AWS today announced Inferentia processors, which should be available next year. Google has a head start of several years, so it will be interesting to see how AWS’s offering matches up. In addition to processors there were all kinds of enhancements, so if ML is your thing you should definitely read the AWS blog posts about the new features. One thing I’m going to “kehdata” (sorry English speakers, ‘dare’ is a rough translation of the term, but at Gofore it carries much more meaning; email me and I’ll explain the concept) to highlight is AWS DeepRacer. DeepRacer is a radio-controlled car with an Atom processor, a video camera, ROS and so on. It would definitely be a fun way for people to practice ML and reinforcement learning.

DynamoDB on-demand

Traditionally, DynamoDB tables have had to define both read and write capacity, and performance was fairly static (assuming your data is modelled correctly and you know your access patterns). Then came auto scaling, which automatically tunes the read/write capacity values based on your traffic. Now we also have the option of on-demand billing. Based on the blog posts and documentation, the on-demand option scales very well right from the start, without the need to specify read/write capacity. The cost model is interesting and more closely matches, for example, Lambda’s model, where you only pay for what you use. If your DynamoDB usage is spiky then on-demand might be a very good fit, whereas a continuous, huge volume of traffic is much more cost-effective to run in the traditional provisioned mode, where you specify the capacity yourself.
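For reference, creating an on-demand table with boto3 looks roughly like this (a minimal sketch; the table and attribute names are invented for illustration):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # BillingMode=PAY_PER_REQUEST enables on-demand capacity: no
    # ProvisionedThroughput to define, you pay per read/write request.
    dynamodb.create_table(
        TableName="orders",  # hypothetical table
        AttributeDefinitions=[
            {"AttributeName": "order_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "order_id", "KeyType": "HASH"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )

Switching an existing table between provisioned and on-demand modes uses the same BillingMode field via update_table.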


AWS Control Tower

For several years the best practice has been to distribute applications/services/teams into different AWS accounts, and furthermore to segregate development, testing and production into their own accounts. The natural outcome is that the number of AWS accounts in organizations has exploded. So far, getting an overall view of all your accounts has largely meant DIY solutions, and the bigger the organization, the more pain this causes.
Today AWS announced Control Tower, which aims to alleviate some of these problems. It automates the set-up of a baseline environment, using existing services like Config, CloudTrail, Lambda, CloudWatch and so on. Read more on the product page: https://aws.amazon.com/controltower/features/
As an AWS partner our company has a huge number of accounts, so for us Control Tower is a very welcome improvement. We are investigating what exactly it brings to the table and where you might still need custom solutions. Stay tuned for more blog posts concentrating solely on Control Tower. It is currently in preview, so a signup and a bit of luck are needed to get an early taste of it.

Amazon Timestream

CloudWatch metrics aren’t exactly new. They have existed for a long time and are the de facto solution for metrics collection from AWS services. In addition to CloudWatch, it is very common to see InfluxDB or Prometheus at our clients (usually combined with Grafana for visualizing the time-series data).
Today AWS announced Amazon Timestream, a managed time-series database. Targeted solely at time-series data, Timestream competes directly with Prometheus, InfluxDB, Riak TS and Timescale. Naturally, this is excellent news if you don’t want to manage servers and want your time-series database as a service: no more EC2 instances running Prometheus, no more DIY solutions for HA and so on. The AWS mantra has long been to leave the “undifferentiated heavy lifting” to them and concentrate on your application and business logic, and Timestream follows this idiom perfectly. Timestream is currently in preview, so a signup and a bit of luck are needed to test it.

Quantum ledger database

Quantum Ledger Database and managed blockchain. Well, now we have all the buzzwords in one blog: AI/ML is handled already, and now it is time for blockchain. AWS announced today two services loosely related to each other, both currently in preview. Quantum Ledger Database (QLDB) is a database with a central trusted authority and immutable, append-only semantics, keeping the complete history of all the changes ever made. What does it have to do with blockchain? Well, all the changes are chained and cryptographically verifiable. There is a huge number of use cases! In addition to QLDB, AWS also announced Amazon Managed Blockchain, which supports Hyperledger Fabric and Ethereum (Hyperledger Fabric comes first, Ethereum later).

CodeDeploy

There were other new features launched that might stay under the radar if the focus is only on the keynote. One that is very relevant for my current project is CodeDeploy’s ability to do native blue/green deployments to ECS and Fargate (more here: https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/).
This will definitely be tested out next week.
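Meanwhile, here is roughly what triggering such a blue/green deployment looks like with boto3 (a sketch only; the application, deployment group, container and ARN values are all made up, and the deployment group must already be configured for ECS):

    import json
    import boto3

    codedeploy = boto3.client("codedeploy")

    # The AppSpec tells CodeDeploy which ECS task definition to shift
    # traffic to and through which container/port the load balancer flows.
    appspec = {
        "version": 0.0,
        "Resources": [{
            "TargetService": {
                "Type": "AWS::ECS::Service",
                "Properties": {
                    "TaskDefinition": "arn:aws:ecs:eu-west-1:123456789012:task-definition/web:42",
                    "LoadBalancerInfo": {"ContainerName": "web", "ContainerPort": 80},
                },
            },
        }],
    }

    codedeploy.create_deployment(
        applicationName="my-ecs-app",
        deploymentGroupName="my-ecs-deployment-group",
        revision={
            "revisionType": "AppSpecContent",
            "appSpecContent": {"content": json.dumps(appspec)},
        },
    )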

AWS App Mesh

One more nice announcement was AWS App Mesh, an Envoy-proxy-based service mesh for EKS, ECS and Kubernetes running on EC2. Like other service meshes, the idea is that applications or microservices do not need built-in functionality for service discovery (and possibly load balancing or circuit breaking); the service mesh takes care of it, and the applications are simpler to implement. App Mesh is in preview, but more information can be found on GitHub: https://github.com/awslabs/aws-app-mesh-examples
Like I said, this is not a definitive list of all the new changes. There are literally tons of new things! Let’s see if Andy left any announcements for Werner tomorrow (hopefully so).

Aki Ristkari


AWS Re:Invent 2018, day 2

Things are moving fast. Day 2 included the Partner keynote and didn’t contain that many technical announcements. The news in the keynote was mostly about the AWS Marketplace.

Marketplace

AWS introduced the “Private Marketplace”. It allows customers to create a customized catalogue of pre-approved products from the AWS Marketplace, so administrators can select only products that are authorized or otherwise meet criteria decided by your organization. The Private Marketplace can be customized with custom branding: the logo, texts and colour scheme can be changed to match your organization. All controls that administrators set up for the Private Marketplace are applied across AWS Organizations.
This kind of customization and pre-approved catalogue of SKUs can be useful for bigger organizations that wish to have control over what gets deployed. However, using this kind of feature requires vigilance over what you offer through the Private Marketplace: introducing too much command & control may have a detrimental effect on the agility and speed the cloud provides.
In addition to the Private Marketplace, AWS introduced container products in the Marketplace. These container products can be run on ECS, EKS and Fargate, and they come as task definitions, Helm charts or CloudFormation templates. This announcement makes both VMs and containers first-class citizens in the Marketplace, and it also offers sellers new options for distributing their software.

Ground Station

The Marketplace wasn’t the only fascinating new release. Ground Station is a service for communicating with satellites in orbit. This basically means that launching a satellite and talking to it can now be accomplished for very little money compared to the past, when in addition to the launch costs you had to build your own ground station (radios, antennas, etc.). Universities, schools and companies can now launch satellites if they want to. Space technology is being brought to the public, and this will hopefully help to create new innovations, products and services.
I have to admit that “Satellite Communications as a Service” (should it be SCaaS?) wasn’t even on my list when I wondered what AWS might publish during the week. There are some caveats in the service, though! You will need a Federal Communications Commission (FCC) license and the NORAD ID of your satellite, and you will need to contact AWS to activate the service, so you cannot just arbitrarily book antenna time and start shooting radio messages into the sky.

CloudWatch Logs++

Amazon CloudWatch Logs Insights was announced, bringing Kibana-like query features to CloudWatch. It can read multiple formats, and an especially useful feature is that it auto-detects field names if your logs are JSON-formatted. This might reduce the need for an ELK stack, and it brings CloudWatch dashboards to a whole new level.
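To give an idea of the query interface, here is a sketch of running an Insights query with boto3 (the log group name and the query are made up):

    import time
    import boto3

    logs = boto3.client("logs")

    # Query the last hour of a log group; Insights has its own query
    # language with commands like fields, filter, sort and stats.
    query = logs.start_query(
        logGroupName="/aws/lambda/my-function",  # hypothetical log group
        startTime=int(time.time()) - 3600,
        endTime=int(time.time()),
        queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
    )

    # Poll until the query completes, then print each result row.
    while True:
        results = logs.get_query_results(queryId=query["queryId"])
        if results["status"] == "Complete":
            break
        time.sleep(1)

    for row in results["results"]:
        print({field["field"]: field["value"] for field in row})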

DynamoDB

Finally, it is time to talk about DynamoDB. Today it was announced that DynamoDB now has transactions. Transaction support makes DynamoDB usable in a huge number of new use cases. Now, DynamoDB is a controversial subject, especially among developers (this is my experience, YMMV). Modelling your data for a NoSQL database is not always straightforward: developers don’t usually have to care that much about data access patterns, but with DynamoDB, modelling the data so it fits the access patterns is the first thing they have to think about. It has been my observation that developers tend not to like it.
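To illustrate what the new transactions enable, here is a sketch of an all-or-nothing write across two tables with boto3 (the tables, keys and condition are invented for illustration):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Either both writes commit or neither does; the ConditionExpression
    # makes the whole transaction fail if the stock would go negative.
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Put": {
                    "TableName": "orders",
                    "Item": {"order_id": {"S": "o-123"}, "status": {"S": "PAID"}},
                }
            },
            {
                "Update": {
                    "TableName": "inventory",
                    "Key": {"sku": {"S": "widget-1"}},
                    "UpdateExpression": "SET stock = stock - :n",
                    "ConditionExpression": "stock >= :n",
                    "ExpressionAttributeValues": {":n": {"N": "1"}},
                }
            },
        ]
    )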
If you want to know more, I suggest you watch this year’s DAT401 session on YouTube once it is available (DAT401 – Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB).

Other news

– Amazon Comprehend now understands medical text
– A new service, AWS Elemental MediaConnect, for video ingestion and distribution
Day 3 will be huge, since Andy Jassy’s keynote is in the morning and it will be packed with updates.

Aki Ristkari


AWS Re:Invent 2018, day 1

Now that Re:Invent is in full swing, the flurry of new features is relentless. Let’s go through a couple of the most noteworthy announcements from Day 1.

IoT

IoT has received a lot of love.

  • IoT SiteWise (preview) targets entire plants and industrial equipment instead of the small sensors normally associated with IoT.
  • IoT Events (preview) is aimed at event correlation between multiple sensors; it helps to recognise system-wide events and enables alerting on such occurrences.
  • IoT Greengrass is extended with external app connectors, hardware root of trust (using Hardware Security Modules or Trusted Platform Modules) and more.
  • IoT Things Graph (preview) gives developers an easy way to build IoT applications: it hides low-level details and enables packaging functionality as reusable components.
  • Also, Bluetooth Low Energy is now supported in Amazon FreeRTOS (beta).

So overall there were quite a few announcements in the IoT space. If you are doing IoT, there should be some interesting features among these that make life a lot easier.

AWS Transit Gateway

A new feature which allows users to connect their VPCs and on-premises networks to a single gateway. The Transit Gateway acts as a centralised hub, with VPCs and on-premises networks connecting to it as spokes. It supports both dynamic and static routing, and since the Transit Gateway can forward DNS queries, it is possible to resolve IPs in other VPCs connected to the same gateway. In addition, there is monitoring, security and management using IAM and CloudWatch, and there is support for Equal Cost Multipath (ECMP) routing over VPN connections to on-premises networks.
Overall, Transit Gateway is a huge step forward in networking: it makes creating complex topologies much easier. Especially enterprise customers, who might have multiple accounts used by multiple departments, should now be able to create more uniform access to on-premises networks instead of connecting each VPC individually via VPN/Direct Connect.
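As a rough sketch of how the hub-and-spoke setup is built with boto3 (all IDs below are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Create the central hub. The gateway starts in a 'pending' state;
    # in practice you wait until it is 'available' before attaching.
    tgw = ec2.create_transit_gateway(Description="central network hub")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a VPC as a spoke; repeat per VPC (and add VPN/Direct
    # Connect attachments for on-premises connectivity).
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
    )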

AWS Global Accelerator

If Transit Gateway is useful for inter-VPC communications, then AWS Global Accelerator is at least as useful but targets the Internet. With Global Accelerator, applications can make use of the AWS global network backbone. Global Accelerator removes the need to manage different IP addresses for different regions: it reserves two static IPs and anycasts them globally. Traffic enters the AWS network at the nearest point of presence and from there travels over the AWS backbone until it reaches its endpoint. Endpoints can be configured in different AZs or regions and are continuously health-checked. Global Accelerator greatly simplifies multi-region setups and provides a smoother end-user experience.
This is definitely on my “gotta try it out” list. One more step towards making multi-region setups more common.
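For the record, setting one up with boto3 looks roughly like this (a sketch; the accelerator name and the ALB ARN are placeholders):

    import boto3

    # The Global Accelerator API is served from us-west-2 regardless of
    # where your endpoints live.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    # Creating the accelerator allocates the two static anycast IPs.
    acc = ga.create_accelerator(Name="my-accelerator")
    acc_arn = acc["Accelerator"]["AcceleratorArn"]

    # Listen on TCP/443 and send the traffic to an endpoint group in
    # one region; additional endpoint groups add more regions.
    listener = ga.create_listener(
        AcceleratorArn=acc_arn,
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="eu-west-1",
        EndpointConfigurations=[
            {"EndpointId": "arn:aws:elasticloadbalancing:...", "Weight": 128},
        ],
    )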

Nitros and more

With the new AWS hypervisor system called “Nitro”, there is now a new instance type, C5n, featuring 100 Gbps networking. Not much more needs to be said: more bandwidth is always good, and for customers who are maxing out 10 Gbps or 25 Gbps links this is a welcome relief.
Then we have a very interesting announcement: EC2 A1 instances. The interesting part is the 64-bit ARM processor, custom-designed silicon called “Graviton”. That’s it, no x86. Several Linux distributions can already run on these instances, and it will be interesting to see what kind of adoption these machines receive. Moving outside the AWS context, it is also fascinating to see ARM processors starting to take on areas normally dominated by x86 chips: Apple’s A10 chip and now Graviton from Amazon. Should Intel feel threatened? Time will tell.
Ever wondered what kind of server fleet runs customers’ Lambda functions? Or Fargate containers? Wonder no more: AWS has released “Firecracker”, a microVM technology for running containers. Will it find its way into other open-source projects?

Wrap up

Today’s announcements touched some very fundamental building blocks. The fundamentals have changed so much that developing multi-region applications or multi-account networking looks a lot different than it did 24 hours ago.
More announcements and news are being released throughout the week. I’ll post again tomorrow; let’s see what surprises AWS has prepared for us!

Aki Ristkari


AWS Re:Invent 2018, day 0

Before diving into the technical aspects and the new announcements, I’ll take a moment to write a bit about the time before the actual conference. If you have never participated before, there are a couple of ‘gotchas’.

Travel early…

Travelling from Europe is tiring, and it’s better to arrive early to give yourself time to recuperate. Also, when travelling from Europe, remember that if you have a connecting flight inside the USA, you will have to do the customs/CBP procedures when you first land. Combined with the fact that your luggage must be collected from baggage claim and re-checked onto the domestic US flight, this means you should reserve enough time for your connection; otherwise you will experience added stress and potentially miss it.

When in Las Vegas… 

Remember that the whole city is designed specifically to separate people from the contents of their wallets. Everything costs money, and more often than not the price is not cheap. Las Vegas is in the middle of the desert and the air is dry, which is something to take into account if you have sensitive skin. For me, the effect of the dry air is most visible on my beard: in Finland it is usually much more curly due to the more humid air, whereas here it straightens out considerably. I bet you wanted to know that 🙂
Las Vegas Boulevard, aka ‘The Strip’, isn’t that long on the map, but it is long enough that moving between the different venues takes time. If at all possible, plan your schedule to minimize moving between venues. AWS has booked shuttle buses, there’s a monorail, and you can walk, but all the options take time, and most of the time a sea of other attendees will be moving along with you.
Also, contact other companies and people. There is a huge number of smaller gatherings and parties organized by different companies, and the opportunity to network and get to know people is huge. Attending your local AWS meetups will help you connect with others.
In the end, conferences are best experienced first hand. The technical information can be learned from the streams and YouTube videos, but being visible and networking won’t happen if you don’t attend. Furthermore, attending with only one person is overwhelming: absorbing everything that is available is a huge task, and combining it with networking and possibly staffing a booth is even more so. Consider sending more people, preferably 2-3, and if you have clients with you or a stand in the expo you need even more. Naturally, for a consulting business this can be a pretty big investment: there are the costs of the trip itself (tickets, flights, hotels, per diem, etc.), and in addition the attendees are not doing billable work. So attending Re:Invent can also be seen as a commitment; you are committed to your partnership with AWS.

Actual announcements and news!

On Sunday, the 25th of November, the actual Re:Invent hadn’t started yet, but there were already some program items, more specifically Midnight Madness and the Tatonka challenge. Midnight Madness is a launch party, or pre-party, and the Tatonka challenge is an event where attendees try to eat huge quantities of chicken wings. I had the home advantage of living in Tampere, the wing capital of Finland. Long story short: I didn’t win Tatonka. But in addition to Tatonka and Midnight Madness there was the first official launch: AWS announced ‘AWS RoboMaker’.
RoboMaker is intended to help developers create robotic applications. AWS has extended the open-source robotics framework ROS so that it includes connectivity to the cloud. RoboMaker aims to be a complete development environment, including an IDE, simulation capabilities and fleet management.
Robotics is not an area that comes up in my daily work. However, if you are working in such a field, this new offering might be useful for you. I also hope that offerings like RoboMaker will help different ecosystems grow: making robotics and robot development accessible to a bigger audience helps innovation and might produce completely new products and offerings.
In addition to RoboMaker, some interesting announcements and new features were published during the last few days that might go unnoticed at the grand scale of Re:Invent. Here are some of them, sampled by me (the listing is not comprehensive):
  • EFS infrequent access storage class – coming soon. EFS will be getting an infrequent access storage class, much like S3 has. Naturally, this helps with cost control and should be interesting to anyone using EFS.
  • Amazon Rekognition. Improved facial analysis detects faces with greater accuracy and confidence. Should be interesting if your use case includes Amazon Rekognition.
  • AWS DataSync. A new service that automates transferring data between on-premises storage and S3 or EFS. It is mostly aimed at hybrid solutions and cloud migrations; definitely something to check out if you are working in that space.
  • S3 batch operations – preview. Simplifies the management of huge numbers of objects. Bulk operations have usually meant custom code developed by AWS customers themselves; batch operations aim to reduce that complexity, whether you are moving objects, replacing tags or managing access controls. The use cases are almost limitless, ranging from compliance to backups to data migrations. (A rough sketch of submitting such a job follows this list.)
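Since batch operations are driven by a manifest plus a job definition, here is a sketch of submitting a copy job with boto3 (the service is still in preview, so details may change; every account ID, ARN and bucket name below is a placeholder):

    import boto3

    s3control = boto3.client("s3control")

    # Copy every object listed in a CSV manifest into a target bucket,
    # writing a completion report for auditing.
    s3control.create_job(
        AccountId="123456789012",
        ConfirmationRequired=True,
        Priority=10,
        RoleArn="arn:aws:iam::123456789012:role/batch-ops-role",
        Operation={
            "S3PutObjectCopy": {"TargetResource": "arn:aws:s3:::destination-bucket"},
        },
        Manifest={
            "Spec": {
                "Format": "S3BatchOperations_CSV_20180820",
                "Fields": ["Bucket", "Key"],
            },
            "Location": {
                "ObjectArn": "arn:aws:s3:::manifest-bucket/manifest.csv",
                "ETag": "placeholder-etag",
            },
        },
        Report={
            "Bucket": "arn:aws:s3:::report-bucket",
            "Enabled": True,
            "Format": "Report_CSV_20180820",
            "Prefix": "batch-reports",
            "ReportScope": "AllTasks",
        },
    )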

That’s it for Sunday in Vegas. Let’s see what Monday brings!

Aki Ristkari


On Tuesday the 13th of November, our strike force of four Gofore cloud experts from Tampere headed to Kistamässan – Stockholm’s own version of Silicon Valley. With registrations made a month prior, we were eager to learn new things about Google Cloud, especially in the area of Kubernetes. This blog is a short and completely subjective report of what happened at the summit.

Keynote

As with all summits, we started with a keynote. This particular keynote was almost half an hour late, which was a bummer but understandable, as some 2,000 people were entering the room through two flimsy doors (talk about production bottlenecks). During the opening session I think they walked in eight different speakers (a few making multiple appearances), but I have to admit I lost count at some point. Altogether there was very little that could be considered new information; it served more as a primer for the different tracks forming the core of the summit, which were:

  • modernize my infrastructure (hybrid cloud/IaaS/PaaS)
  • accelerate application development (serverless/kubernetes)
  • create intelligence from data (ML)
  • transform how teams work (Google Suite).

The message was clear, though: in order not to get left behind in the evolving IT landscape, you have to embrace the cloud.
I have to admit that I totally lost my concentration around two-thirds into the session, but the presentations by the first few Google employees left a good impression. Out of the tracks presented, I chose to stick with “Accelerate application development”, as it was the most relevant to me. Yay for Kubernetes!

Accelerate application development – Building Serverless applications

The first session after the keynote was an entry-level introduction to serverless. Surprisingly, McDonald’s (which didn’t strike me as a tech company, but isn’t every company a tech company nowadays?) did a joint presentation with Apegroup on how they created a global microservice-architecture application using Google’s App Engine. Previously it took McDonald’s 18 months to get a new product into a single market; now, with a smaller team, they can get a product into 42 markets in the same time. The takeaway here (pun intended) was that big companies can, and should, utilize smaller software companies’ competence and agility in modern software development.

Accelerate application development – CI/CD pipelines in the cloud

The next session was left with the task of catching up on lost time, which showed in the speed at which the slides flew by. The elevator pitch for this session was: “Companies should be in the business of building features for users, not configuring and maintaining developer tooling”. I want this phrase emblazoned on our office wall! Google’s products for CI/CD were introduced:

  • Cloud Source Repositories
    • Basically a private Git repo hosted on GCP, which can be synced with an existing Git repo
  • Cloud Build
    • Hosted build execution on GCP for building container images or non-container artefacts
  • Spinnaker
    • Open-source continuous delivery platform
    • Multi-cloud CD platform
  • Grafeas
    • Store, query and retrieve critical metadata about software artefacts
    • Easily add new metadata types and providers
    • Query all metadata across all of your components in real time
  • Vulnerability scanning
    • Check your containers for vulnerabilities
    • Create policies that dictate which kinds of images can go through your pipeline

The presentation ended with a demo of the Stackdriver Incident Response and Management tool (still in alpha), which seemed like a viable solution for incident management for an app hosted on GCP. Another new product showcased was Stackdriver Profiler (in beta), which gives insight into what is happening in the code. The demo actually got me excited for the first time during the whole event, especially the part where Stackdriver Incident Response and Management automatically correlated a slow web page with the Kubernetes cluster status. I want to try that product on some future project!

Accelerate application development – Best practices for securing virtualized and Kubernetes managed workloads on GCP (Palo Alto)

This session started with a talk that was kind of similar to AWS’s DDoS whitepaper but with GCP terms and a few specialities such as GCP shared VPC. You might guess from the sponsor that the session was oriented towards threats and networking, which it was. Little to bring back home with me, though.

Accelerate application development – Let go of your VM – Containers & Kubernetes, the next generation

I really wish I had read the session description before queuing for 10 minutes to get back to my seat. This session explained thoroughly what a microservice is, what Kubernetes is, what containers are and when you should use them. If you are still new to these terms, I think this Kubernetes comic explains them quite well.

Accelerate application development – Kubernetes & Istio: the efficient approach to well-managed infrastructure

What is Kubernetes and what is GKE? At the end of the session there were a few selling points for Istio.

Accelerate application development – Cloud security

This session was a quick look into Google Cloud’s security. There was also a note about the importance of using security keys as part of an MFA strategy (no security breaches at Google since implementing them). For me, the interesting part of this presentation was the case example from Mehiläinen. The interesting part wasn’t the architecture or the application running in Google Cloud, but how Mehiläinen stored PII data in Google Cloud in the Finnish region (in Hamina). If I remember correctly, they compared the Hamina data centre to their competitor’s, located in Helsinki and hosted by a Canadian company, and noted that the Hamina data centre is most probably way more secure. It was also mentioned that they couldn’t move some of their production load to GCP yet, as the Finnish region’s data is routed through Sweden, giving latencies too high for production use, but this should be fixed in the near future. Probably the best session, and I’m not just saying that because both presenters were Finns.

Venue and organization

When we arrived at the site, there seemed to be an excessive number of venue employees everywhere. After about two thousand more people arrived, things changed: serendipity walked out of the room (and took all the liquids other than coffee with her), and you had to stand in line for everything. It wasn’t super bad, but things could have run smoother, and eating on the floor wasn’t especially thrilling. Stands were few in number, and I think the vendors at those stands got great value for their money visibility-wise (note to self: next time buy a spot).

In Summary

Like all cloud summits, this one worked as a good primer for your cloud quest on this particular platform. All the sessions were pitched so that even a complete newbie could grasp what was going on (even the term ‘microservice’ was explained on the slides). Organisation-wise I’ve seen better events, and also worse; this one falls into the category of ‘pretty good’. Because of the lack of learning experiences, this was most probably the last cloud event I will attend as a mere observer, apart from grand events such as Re:Invent or Google Next. The big ones go wayyyy deeper into the technicalities and are thus better suited for me. I might consider a local summit, but that would mean concentrating solely on socializing and collecting swag.

Tero Vepsäläinen


Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


Coding In The Woods

This summer, the freshly founded Gofore Glub called the Wilderness Glub started its first trek. A group of 10 enthusiastic hikers set their destination as the Southern Konnevesi national park in Central Finland, with a plan to hike during the day and stay at a lodge overnight. Apart from hiking and exploring the wilderness, there was another goal: to gather the first experiences of “coding in the woods”. The idea of writing code in the wilderness had been evolving for a while among a few colleagues, and now it was time to put it into practice.

At Gofore we like to encourage our team members to follow their passions, so we created Gofore Clubs – or Glubs, as we call them. These Glubs are supported by Gofore, and we have many thriving ones, ranging from cooking to coding, from money bags to mountain biking – and now we also have the Wilderness Glub.

So we loaded our backpacks with a small number of necessities and a laptop preloaded with material for studying the basics of the Elixir language, and the trek was ready to begin.
The jolly group of hikers
After walking for a few kilometres, it was time to take our first look at the Elixir language: its type system and the IEx REPL. Making a pleasant change from the office, we took a short break at a campfire site by a lake, which gave me the chance to go through the introduction to the Elixir language and start running commands in IEx. The rest of the group checked their gear and enjoyed a small snack. All the preparation for coding proved solid, as the necessary material was ready and following the list of instructions was easy. After this short introduction, it was time to continue the hike.
Following the trekking route through a swamp and some old pines, we made it to the top of the Kalajanvuori hill, where we took the next pause. With vistas over the neighbouring valleys and forests, it was a worthy place to stay for a while, and again it was time to pull out the laptop. Sitting on a large rock 60 metres above the lake, the next Elixir topics to learn were operators and pattern matching. 25 minutes was enough to gain some insight into the language features.
Coding on the rock
Then it was time to finish the trek and leave the rocks and the Elixir data structures behind. Being on foot in the wilderness of central Finland seemed to help with processing the new information: there was time to think about syntax details while picking the next foothold. We completed the planned route and were ready to prepare supper at our camp. A member of our group managed to catch a pike from the lake, which provided us with some fresh supplements.
In the evening we set off onto the lake for a combined fishing trip and Elixir workout. We rowed around a few islands, and the focus moved from the oars to Elixir streams. The wind had already steadied, giving us good conditions for coding as the lake remained calm.
Coding on the boat
With a completed Elixir topic and a few fish, it was time to return to the camp and call it a day.
Let’s summarize the day from the viewpoint of a software designer.
The pros were:

  • The air was fresh and calm, very nice to breathe – this is the best air conditioning you can get.
  • Plenty of sunlight and space around – eyes feel relaxed after short breaks of staring into the distance. Also, our vitamin D supplies got fully loaded.
  • Encountering new out-of-the-box people – I got new hints on how to repel mosquitoes.
  • Ergonomics – you aren’t fixed in one position. The changing workstation and plenty of movement are a sweet treat for an office worker’s posture. No worries about tense muscles at the end of the day.
  • Spotted a cuckoo – I don’t remember seeing one before.

Followed by the cons:

  • Mosquitoes sting hard and draw your attention. With a laptop on your lap, you’re a sitting duck.
  • Bugs try to crawl into the laptop – they must find the display and the heat attractive.
  • The battery and the Internet connection – you’re on your own; there’s no one to back you up. You need to plan wisely so you don’t get blocked by an empty battery or a missing connection.
  • Weather – you can’t go out when it’s raining or your hardware gets ruined.

So the sun set and the day was through. What was there to learn from the coding trek? Clearly, coding in the woods requires a good amount of planning beforehand, and the weather forecast needs to be followed carefully. On the other hand, the day was an intriguing experience: breaking the normal routines felt refreshing, and the method itself turned out to be valid for learning new programming skills. Fundamentally, wilderness and coding appear to be two very separate areas. Maybe that is not the whole truth, though – could this be the way of the 21st-century hunter-gatherer?
This field definitely requires more study; let’s see what the next trip reveals.


Juha Lauttamus
