Data exchange without borders

The latest X-Road Community Event was a huge success. With 150+ participants from 22 countries, it is evident that enabling digital societies, through both interest and tangible action, is a hot topic among nations worldwide.
The event was organised by the Nordic Institute for Interoperability Solutions (NIIS), which develops and manages an open source data exchange solution called X-Road. X-Road is the basis for data exchange in the public administration of Estonia and Finland, both founding members of the organisation. Lately, Iceland and the Faroe Islands have also joined as partners, and various countries and regions in Europe, Africa, the Americas and Asia have run trials and adapted X-Road for their use. See the X-Road world map for details.

X-Road development

Currently, Gofore is the sole developer of the X-Road core for NIIS through a public procurement.
X-Road version 6 is deployed in Finland and Estonia, and Iceland will follow suit shortly. The Faroe Islands and some other countries are preparing to migrate their platform from version 5 to 6.
At the event, plans for the development of the next version of the software, X-Road 7 Unicorn, were introduced and presented in various workshops by experts from Gofore, the Finnish Population Register Centre (VRK) and the Estonian State Information System Authority (RIA). NIIS CTO Petteri Kivimäki stated: “X-Road is not developed for us [NIIS] but for you [nations and organisations]”, so it is evident that close collaboration between the development of the core and existing and planned local installations is highly valued. The MIT-licensed open source software enables maximum utilisation, and all users are welcome to contribute and create pull requests for additional required features.

Planning to utilise X-Road?

If your country or organisation has various data sources and siloed services, taking X-Road into use will enable a fluent, fully secured and easily manageable solution for exchanging data between sources. Such fluent data exchange opens endless possibilities for derivative machine-to-machine applications and easy cross-border data exchange between countries. Of course, the ultimate target is smooth, human-centric services for citizens, which often require an additional trusted digital identity management system to be built alongside the information systems connected by X-Road.
Gofore has experience and deep expertise in all layers of X-Road and in digital identity design, development and deployment, and we are looking to support their utilisation on an international scale.

If you want to hear more, please contact the author or download the leaflet below – it will provide more detail on why, what and how X-Road would help to achieve a digital society in your context.

Interested in reliable, secure and easy-to-use integrations for digital services? X-Road provides this and more. Download our X-Road leaflet to learn more about how it could be utilised in your business: X-Road leaflet

Harri Mansikkamäki

Harri Mansikkamäki is an international digital transformation expert. He drives for human-centric solutions that make a positive impact for citizens. For him technology is the enabler, not a master, and innovation is a stimulated process not an occasional “light bulb moment”.

Linkedin profile


Tampere, Finland, May 02, 2019 — Gofore, a Finnish digitalisation specialist with international growth plans, today announced that it has joined the Google Cloud Partner Program as a services partner, giving Google Cloud customers the ability to benefit from Gofore’s cloud capabilities.
Google Cloud Platform is the third major cloud provider Gofore has partnered with (AWS, Azure and now GCP). We’re proud to be GCP’s 8th partner company in Finland. This bolsters Gofore’s standing as a leading cloud platform consultancy.
As a Google Cloud partner, Gofore offers customers consulting services regardless of their current technology, whether they are moving to a cloud system or planning and building their own cloud infrastructure. In addition, we provide reliable service management on different platforms.
Key features of Gofore’s cloud offering include:

  • Agile Application Development with modern tools and technologies
  • Cloud consulting in all aspects of the cloud
  • User training, workshops

The Google Cloud Platform brings many benefits to our customers including:

  • A truly global private network
  • A great developer experience (many developers say they would prefer GCP for their new projects)
  • Google Cloud is a leader in the AI, ML and container market space
  • Saving costs utilizing the performance and scalability of the Cloud
  • A data centre in Finland for lower latencies for our clients in Finland

Whether you need help with Big Data and creating Data Pipelines, software development, lift and shift or creating cloud infrastructure, Gofore has your back. We have over 7 years of Cloud experience delivering complex customer projects and creating value for our customers.
“Gofore is excited to join the Google Cloud Platform Partner Program. This partnership allows us to expand our proven expertise in the cloud, bring the benefits of the Google Cloud Platform to our customers and provide additional value to them,” said Timur Kärki, CEO, Gofore.

Toni Lahtinen


Security in the cloud

Quite often I hear the claim “on-premise is more secure than the cloud”.
Having worked in both the on-premise and cloud worlds for several years, I’m in a good position to dissect such claims into smaller subsets and make some comparisons.
Regarding cloud environments, I’ll stick with Amazon Web Services (AWS), which I am the most familiar with.

Physical security

Let’s start with physical security.
A properly configured server room must have the following topics covered:

  • Unauthorised access must be denied
  • There must be ways to prevent and detect tampering
  • Although not directly related to intrusion or unauthorised use, a fire alarm and fire suppression system must be present
  • All rack cases must be locked so that, for example, thumb drives cannot be inserted
  • Backups must reside in a remote location and must comply with the same security policy as the on-premise source

In cloud environments, the above-mentioned best practices are the responsibility of the service provider – if not, please change your provider – quickly!
With such best practices in place, a cloud customer doesn’t need to be concerned with the hardware aspects when designing a cloud-based system.

Software security

Regarding software security, the following topics must be covered:

  • Keep software up to date
  • Scan for vulnerabilities
  • Scan for misconfigurations
  • Security is layered

<shameless plug>If you missed my previous post, some of these topics were covered in greater detail here: https://gofore.com/computer-security-principles/ </shameless plug>
Another often-heard claim is “Data is so sensitive that it cannot reside in the cloud”.
Right, so why is that computer connected to the Internet?
Everything is crackable, and the firewall in front of the computer is just a teaser in the game. If the data is that sensitive, then it must be stored in an encrypted format. You’ve got this covered, right? I hope so!
For these kinds of best practices, AWS offers great tools:

  • Encrypted S3 storage (object storage)
  • Systems Manager Parameter Store for keeping secrets, such as database credentials, in encrypted form
  • Key Management Service to automate key handling, including key rotation and audit trail

(to name just a few examples)
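As a sketch of what these tools look like in practice with boto3 (the AWS SDK for Python), the helper functions below only build the request parameters, so they run without AWS credentials; the bucket and parameter names are hypothetical:

```python
# Sketch of encryption-at-rest settings for AWS calls via boto3.
# These helpers only build request parameter dicts, so they run
# offline; bucket/parameter names below are hypothetical examples.

def encrypted_put_object_params(bucket, key, body, kms_key_id=None):
    """Parameters for s3_client.put_object(**params), encrypting at rest.

    Without a KMS key, SSE-S3 (AES-256) is used; with one, SSE-KMS,
    which adds key rotation and an audit trail via Key Management Service.
    """
    params = {"Bucket": bucket, "Key": key, "Body": body}
    if kms_key_id:
        params["ServerSideEncryption"] = "aws:kms"
        params["SSEKMSKeyId"] = kms_key_id
    else:
        params["ServerSideEncryption"] = "AES256"
    return params


def secure_parameter_params(name, value):
    """Parameters for ssm_client.put_parameter(**params): a SecureString
    value is encrypted with KMS before it is stored in Parameter Store."""
    return {"Name": name, "Value": value, "Type": "SecureString"}


# Usage with boto3 (requires AWS credentials and real resource names):
#   import boto3
#   boto3.client("s3").put_object(**encrypted_put_object_params(
#       "my-bucket", "report.csv", b"sensitive,data"))
#   boto3.client("ssm").put_parameter(**secure_parameter_params(
#       "/app/db-password", "s3cret"))

print(encrypted_put_object_params("my-bucket", "report.csv", b"data"))
print(secure_parameter_params("/app/db-password", "s3cret"))
```

The point is that encryption at rest is a single request parameter away, so there is little excuse for leaving it out.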
If a virtual machine is being run, one should be aware that Spectre and similar hardware vulnerabilities pose a danger to some extent, at least in the cloud where resources are shared. An evil-minded attacker’s virtual machine instance needs to be located on the same host machine as the victim’s instance. These kinds of vulnerabilities are patched very swiftly as soon as a fix is available, especially since they pose a danger to the provider’s core business. Therefore these attacks are short-lived – unless a new zero-day exploit is found. And even then, the zero-day exploit must be applicable and:

  • Be moderately quick to exploit to be of any benefit
  • Have a fairly high success rate and grant enough permissions to control the needed resources

An improvement would be to use cloud-native components to handle load balancing, container orchestration, message brokering and so on.
Why? Because those are constantly audited by the cloud provider, resulting in a smaller attack surface compared to managing the whole operating system and its software components (and their updates) yourself.
Copying an insecure application into the cloud doesn’t make it magically safer.
Regarding security standards, AWS complies with the following letter and number bingos:

  • SOC 1/ISAE 3402, SOC 2, SOC 3
  • FISMA, DIACAP, and FedRAMP
  • PCI DSS Level 1
  • ISO 9001, ISO 13485, ISO 27001, ISO 27017, ISO 27018

These standards fulfil the requirements of Nasdaq, the US Department of Defense, and Philips Healthcare, just to mention a few high-profile customers. These organisations take security seriously and have huge budgets for their security teams.
AWS Aurora is a MySQL- and PostgreSQL-compatible relational database service (RDS) that offers automatic scaling and updates, which diminishes the burden of updates drastically. Major version updates can be done this way too, though it’s against best practice to upgrade without testing. You have been warned!
The biggest cloud providers, namely Amazon, Google and Microsoft, have some of the most talented people in the field working on their products to keep their customers’ data secure. Compare this to on-premise scenarios where, in the worst cases, it’s a one-man show. If (s)he is not really interested in security, then it’s a security nightmare waiting to be unleashed.
Nothing protects against faulty configuration choices in the cloud either, though some things are harder to make globally reachable by default.
In conclusion, the cloud is no longer the new kid on the block.
Learn your environment and implement best practices.
A correctly configured cloud is secure and might save the administrator/DevOps engineer from sleepless nights.

You can learn more about gaining cloud certifications in our blog series starting here: https://gofore.com/en/getting-certified-on-all-cloud-platforms-part-1-introduction/

Ville Valkonen

Ville is a System Specialist at Gofore. By day he is an infrastructure automation guru; by night, he zips up his hoodie and codes away focusing on security holes.


Where GCP and AWS are like two peas in a pod, Azure is something different. Azure has taken many of the things found in traditional Microsoft infrastructure and brought them to the cloud. Much thought has also been given to the idea that the cloud should work as an extension of that on-prem Microsoft infra. Recently, Azure has taken steps towards becoming more like its competitors; it has dropped pretty much every container orchestration service other than Kubernetes and is slowly adopting Availability Zones (for example).
Use cases for Azure are mostly projects that go heavy on Visual Studio or Azure AD, or that need to be paired with existing on-prem Microsoft infra. Out of the cloud offerings, Azure is my least favourite, maybe because I’m not too into the Microsoft ecosystem in the first place.

Certification

As with Google Cloud, Azure certifications are a living, moving thing: they evolve over time rather than being snapshots of an era. In addition, Microsoft (rather painfully) retires exams and rearranges its course palette pretty often. I did all my certs in Jan–Feb 2018 by taking 70-535, 70-533 and 70-473, acquiring the MCSE, MCSA and MCP certificates. While writing this blog post I noticed that a) MCSE, MCSA and MCP are no longer a thing, as the certifications are now role-based (like with AWS and GCP), and b) all my exams (except 70-473) have been retired.
All the new certifications can be found here: https://www.microsoft.com/en-us/learning/browse-new-certification.aspx. You might find some resemblance with AWS certificates, as the naming convention is conveniently pretty much the same. One thing still differentiates Azure from AWS and GCP: you don’t always take a single exam to earn a certificate; sometimes you have to pass several. Also, some certs have prerequisites, which is something AWS gave up in 2018. You can find all the exams here: https://www.microsoft.com/en-us/learning/exam-list.aspx
Here is a chart of certs and correlating exams:

  • Azure Fundamentals: AZ-900
  • Azure Administrator Associate: AZ-100 and AZ-101, or AZ-102 (transition exam for people who passed Exam 70-533)
  • Azure Developer Associate: AZ-203, or AZ-202 (transition exam for people who passed Exam 70-532)
  • Azure DevOps Engineer Expert: AZ-400 (in beta); prerequisite: Azure Administrator Associate or Azure Developer Associate
  • Azure Solutions Architect Expert: AZ-300 and AZ-301, or AZ-302 (transition exam for people who passed Exam 70-535); prerequisite: Azure Administrator Associate or Azure Developer Associate

The transition exams will retire in June 2019 so if you want to upgrade your certs, do so fast. I thought I would be done with certs for a while, but it’s possible that I’ll spend a day this spring studying the AZ-302 and trying the transition exam. I’m not especially thrilled by this change of plans, but I’d hate to waste all the time I spent doing Azure exams either.

Exam preparation

Even if the exams have changed, I think the training methods for Azure stay the same. When I searched for viable online courses, pretty much the only good source was Scott Duffy at Udemy, and I know a few other people who have passed the Azure certs by following his advice. I’m quite confident that I saw Scott’s courses on acloud.guru at some point too, but can’t seem to find them there anymore.
You should also use Microsoft’s special offers when they are applicable. Usually, you get an exam, an exam retake and a practice exam for the price of a single exam. As the exams are new, though, first check https://www.mindhub.com for the availability of the practice exam; sometimes it takes a while before practice exams can be bought. The practice exams give you good insight into the exam area and, instead of just “right” and “wrong”, also guidance on where you can learn more about the subject. A model of the exam and a free retry also give some room for experimenting (or brute force), which takes some pressure off the exams.

Taking the exam

Where Azure shines is that you can take the exam at your workplace or at home, given that you have a webcam and a mic on your machine. The exam situation is rather silly, as they inspect a lot of stuff in the room and on your body, but you can schedule an exam for the same day and get it done. The system failed me only once, so learn from my mistake: even if you do the system check early on (which I suggest if you have never taken Azure exams this way before), run the system check again on the day of the exam. For me the exam software failed somewhat miserably, as a new update wouldn’t start, robbing me of a single exam try. As it was a software failure, I managed to get a new try for free, but I had to wait a week for the issue to be diagnosed, during which I couldn’t attempt the exam again.
Phew. This concludes the blog series on cloud certifications. The rather anti-climactic ending was not of my design, as my exam-specific knowledge became obsolete in one giant whoop. If you are interested in the cloud and already have some experience with it, check whether one of our open positions would suit you at https://gofore.com/en/careers/

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 2: Amazon Web Services (AWS)
Part 3: Google Cloud Platform (GCP)

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


In this blog post, it becomes obvious that out of all the certification paths available I’ve chosen the one more related to the Ops side of the DevOps spectrum. I’ve been there when infrastructure couldn’t be considered code, when a server needed fitting into a rack*, and I’ve never written “real code” in my life apart from simple Perl/Python scripting. With the cloud, I’ve continued to build upon that foundation. This is reflected in my certifications: I’ve done the Ops path but left the Dev side totally untapped (a trend that continues in the next [Azure] blog post). After doing my share of cloud certifications, I’ve dipped my toes into the realm of modern web programming. Does this mean I should do the Dev certs next? Nope. The Pro level certs take a lot of time and I don’t see the investment of my free time paying back any time soon.
* Yes, cloud computing is just a cluster of computers that also needs fitting into a rack and petting, but it’s not really a mainstream job for a system administrator anymore, is it?
Ok, enough of my motivational circumstances. What about GCP then? I think out of all the clouds it’s the most user-friendly: the web console is just way better than AWS’s, the shell you get in the web console is nice, and I would totally like to use App Engine on a project (with live debugging and other shiny thingies). Stackdriver as a whole still feels a little weird to me, but it has tons of functionality. And with the Google Kubernetes Engine being arguably the best in the business, I would choose GCP whenever I had to run a container production load.

Certifications

I’ve done the Associate Cloud Engineer and Professional Cloud Architect certs. There are also Professional Data Engineer and Professional Cloud Developer certifications, and even one on G Suite, but I’ve got no experience of those. Unlike AWS and Azure, Google actually does its own web training – and it is far better than any of the 3rd-party offerings. How great is that! They also send you some (mostly useless) swag when you complete any certificate, which is a nice gesture.
You can find all the exams here: https://cloud.google.com/certification/
Picture1: The least worst swag I got from Google

Associate exam

I did the Associate Cloud Engineer beta exam as a sort of practice run for the professional one (because why not?). It took a few months for the exam results to arrive (for beta exams it tends to be that way). I’d actually totally forgotten about the whole exam by then, and it turned out that I was one of the first one hundred to get the certificate. The exam was the least theoretical cert exam I’ve ever completed. If I had to give any hints, it would be to get familiar with the web console and the SDK tools. Use them, preferably in a practice project. Yes, you have to know some commands. Yes, you have to know where stuff is in the web console. This is an exam for a Cloud Engineer; it measures how well you can do stuff.

Pro exam

Out of all the cert exams I’ve done, the Professional Cloud Architect was my favourite: hard, but interesting. It threw really, really odd curveballs at me that couldn’t be prepared for, and learning trivia by heart had no value in this one. It measured whether you knew your cloud, and I mean everything that comes with it. It was also the exam I spent the most time studying for. I think it took some three months at one to two hours per day for me to get comfortable with the whole exam area.

Study materials

For both the associate and the pro exam I suggest the Google-made Architecting with Google Cloud Platform Specialization. It might be a little too deep for the associate, but better too deep than too shallow.
Also for the pro, I would supplement the studying with the following:

Also, as Google’s certs are a moving target (they get updated constantly), keep up with the news on their blog, and I strongly suggest watching the Google Next ’18 talks on the relevant services.

Doing the exam

At least in Finland, you can only take the exam as a proctored exam, where someone observes you while you take it, and this is only available in Espoo or Helsinki. You can reserve your exam time at https://cloud.google.com/certification/.
Unlike with AWS and Azure, at the end of the exam you are told whether you passed or not, but get no indication of how well you fared. No points, no percentages, nothing. I think this can be extremely frustrating for people who do not pass, as they have no idea whether they were even close. Just hope you’ll see the “you passed” message and get to order some swag for yourself.

Shameless marketing

I’m also head of GDG Cloud Tampere and we’ll be hosting many nice events this year. Join the fun at https://www.meetup.com/GDG-Cloud-Tampere/

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 2: Amazon Web Services (AWS)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


My journey to cloud environments started with AWS. First I staggered through the internet trying to find a good guide to understanding cloud computing, the different platforms and the terms used. I spent considerable time on this (retrospectively) useless wandering until I started studying for the Solutions Architect Associate certification and got my first taste of well-structured course material on my first ever cloud: AWS. Even if some people consider certifications silly and a waste of time, the certification courses themselves are a brilliant way of grasping how a cloud platform works.
My personal opinion on AWS is that it may not be the most user-friendly platform, but it’s still the most versatile one out there. If there is something that you can do in cloud, you can probably do it in AWS. This being the case, I would choose AWS as a running platform unless there is some reason not to.
Picture1: AWS’s chart of all the certificates

Certifications

An up-to-date list of AWS’s certifications can be found here. No new associate or pro level certs have been added during the time I’ve been around the scene, but the existing exams have slowly been updated to match the AWS of 2019. The names of the certifications have stayed the same, though. Unlike Azure and GCP, where exams are kept up to date, AWS’s exams represent a snapshot of a given time. From the exam taker’s perspective this is a good thing, but from a practical implementation perspective it’s a bad one. The exam taker expects the study material to stay constant for years, and as such there is lots of exam material online to aid you. In practice, you end up studying old material, and the newest re:Invent stuff is usually not in the exams. The worst (or best?) example of this is the reserved instance classes (heavy/light utilisation) that are obsolete, and no official documentation can be found about them, but I’ve still found questions about them on both the Solutions Architect Associate and Professional exams. Both of these exams have now been updated, and I doubt there are any reserved instance class questions left, but in a few years there will be something similar.
Certifications used to be valid for two years, with a one-year grace period during which you could do a simple re-certification even though your cert had expired. Because that system was rather confusing, the new certs are now simply valid for three years, during which they can be extended by taking the re-certification exam. There used to be a requirement to pass a specific associate exam before you could even attempt a Pro cert exam, but this restriction was removed in late 2018. This doesn’t mean you should go straight for Pro certifications unless you have worked with the platform for years using a plethora of services.

Associate certifications


Picture2: Associate exams and shared material
Associate certs share around 70% of the same “this is AWS” base material with each other, concerning networking, IAM, storage etc. If you do the Solutions Architect Associate certification first (which I recommend), you can do the Developer and SysOps courses with a few days of prepping. Should you choose this method? Well, it looks better on your CV, but really it brings little extra to the table. The Developer certification material covers developer-centred topics such as DynamoDB more thoroughly, and the SysOps material has some more details on OpsWorks and Elastic Beanstalk. You really only have to study the difference between your first associate exam and the new one you’ll be taking. If you’ve got a sponsor for your exam fees and you want to boost your CV, go for it.

Professional certifications

The pro level exams cover some common ground with each other, both being AWS exams, but they share fewer details than the Associate exams do. For me, it took a few months to study for each exam individually. I initially started reading for the DevOps Pro right after I got my associates done, but it was too steep a hill for me to climb, and I ran out of motivation around halfway through the course materials. One year later, with actual AWS projects under my belt, I read through the materials, which now felt easy, and passed the exam quite easily. I tried studying for the Architect Pro after that, but hit that familiar wall once again; fast forward through another year of AWS projects, and I told my colleague how “this course material brings very little new to me” and passed the exam.
For professional certifications I have only one piece of advice:
Do. Actual. Projects. On. Cloud.
After that they are easy.
I think that owners of Pro level certifications are somewhat respected, should such a title appear in your CV, but once again I don’t really think there is much difference between having one or two.

Speciality certifications

Unlike the general ones, the speciality certs share very little with each other, each concentrating on one thing and going deep into it. I have to admit that I haven’t done any of the special certs, only skimmed through their content. I intended to do the Advanced Networking certification, but gave up around halfway through the course material when it was going through the finest details of BGP. As AWS certificates go, they are quite new, with new ones coming every now and then, so I don’t know how much reputation you gain by passing them.

Study Materials

I totally and wholeheartedly suggest that you use www.acloud.guru for studying. The guys and gals there are doing a fabulous job on online courses. acloud.guru has a practice exam (usually) at the end of each course, which is quite sufficient. You can also buy a practice exam from AWS with some 20 questions, but it’s usually badly written, and even if you know your stuff (and pass the actual exam) you might end up with just 60% of the questions correct. If you are feeling cheap and are only going to do one certificate, you can grab an acloud.guru course dirt cheap from www.udemy.com; they have a sale going on every day.
In addition to acloud.guru, I complemented the materials with those on www.linuxacademy.com for the Pro Solutions Architect course, as at that time there was no practice exam option on acloud.guru and I felt that some of the services were not explained in enough detail.
I read every whitepaper that is suggested in the courses, and I also read the FAQs and documentation for the most important services. As noted in the first blog post in this series, do the following:

  1. Read the certification requirements
  2. Take part in a web course that goes through the relevant material
  3. Read the documentation for the most important services
  4. Do some practice exams
  5. Ace the exam

What I forgot to mention, though, is using the service. For every service on the exam, you should use the actual service. If you’ve got a pet project to use them on, great. If you don’t, just click through the dialogues so that you understand every option and how it influences the end result. For Pro certs you also have to do some work with cloud computing; otherwise the wall is too high for you to climb, sorry.

Doing the exam

You can reserve your exam time at www.aws.training. If English is not your first language, remember to indicate this in the portal BEFORE reserving the exam; this gives you some extra time. You can do this in the AWS Training and Certification portal by clicking “Upcoming events” → “Request Exam Accommodations” → “Request Accommodation” → “ESL +30 MINUTES”.
The options are basically a monitored exam, where a person watches how you are faring, or a kiosk. In Finland, the observed option is located in Helsinki, and kiosks can be found in Helsinki and Tampere. There has been a lot of conversation about the kiosk PCs rebooting in the middle of exams, possibly multiple times, and about how they monitor whether you cover your mouth, thus creating more stress. Personally, I liked the kiosk experience, as I could do the exam on the other side of the road from our office in Tampere. Yes, the passport recognition mechanism was broken, as the receptionist told me, and the person on the other end of the line wouldn’t or couldn’t understand that, requiring me to start the exam registration a few times over, but the exam itself went quite smoothly.
Right after the exam, you are told how you fared, and within a few minutes you also receive the results by email, with percentage grading on the different areas of the exam.

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 3: Google Cloud Platform (GCP)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


This blog post is the first in a new series that will be published over the following weeks. The aim is to cover getting certified on all the major cloud platforms currently (1/2019) available in Europe: Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. I’ve done my Pro level certs on all these platforms quite recently (within a year), so I have some knowledge of the subject. I’ve written similar texts on our internal wiki, but as there are no secrets in there, I decided to rewrite the material in a more reader-friendly format (and less like a stream of consciousness).
Like the good authors of certification guides, I’m not claiming that you will get certified by doing what I advise, but I can safely say that following my advice raises your probability of success.
This first blog post is labelled ‘introduction’. I will cover general stuff about certifications, a suggested path for going through the clouds (for the hardcore “gotta get ’em all” cloud people out there) and some general notes on preparing for the certifications. The later posts will each focus on one of the cloud platforms.
The three musketeers

Correlation between getting a certification and knowing your stuff

Before getting to the actual how-to part, let’s think for a while about what a certificate actually is and whether getting one holds any significance. Being a holder of a certificate means that you have passed an exam where your general knowledge of the platform has been tested. Depending on your path (development, architecture etc.) your knowledge goes a little deeper in certain areas, but you most likely need to know the same basic stuff for all associate-level certs on a single platform. For pro level exams, it means that you also possess a deeper level of knowledge on the subject along with problem-solving skills, giving you the title Pro: professional proficiency in the subject. This is not to be mistaken for being a guru. Does a certificate make you a cloud engineer, to be quickly hired and put on a challenging customer project? A Pro level cert would certainly imply that, but an associate? No. An associate cert is a first stepping stone, meaning that you know some rules and best practices of the subject, but without any elbow grease on the platform, it amounts only to a good start. Of course, you can just put your study cap on, study like one possessed and pass a pro exam without ever launching a single instance, but I dare say that’s quite an uncommon scenario.
Why bother with associate certifications then? Well, as said previously, an associate cert indicates that you know the best practices of the platform, and while that might not land you your dream job, it’s still quite a big deal. When working in a cloud environment it’s very easy to deploy applications and create virtual machines, but it’s also really easy to do it wrong, using architecture not fit for the cloud age or, in the worst case, compromising security. Yes, you could just watch the videos, read the documents and be equally knowledgeable as someone with a certificate, but if you took all that time to study, why not do the certification while you’re at it.

Clouds and order of conquest

If your work or side projects do not involve any of the platforms and you are totally free to choose where to begin, I would (once again) pick AWS. AWS is by far the market leader, and mastering it still opens more doors than GCP and Azure combined. If you are more curious about, for example, GCP, pick that. In studying, practicality falls second to motivation.
If you really have a lot of free time on your hands and want to get certified on all the cloud platforms, you can start wherever you want… but if you start with either AWS or GCP, do the remaining one before going for Azure. Terminology- and function-wise, AWS and GCP are so similar that Google has even published a quite handy cross-reference document to help AWS experts grasp their platform. While terms for the higher levels of abstraction, such as block storage and object storage, are similar in Azure too, the Microsoft way of doing cloud is still quite different. Understanding Azure requires you to forget how things are done on the other cloud platforms and learn the Azure way. I did the AWS → Azure → GCP trip and cannot recommend it to anyone.
Why use one sentence to describe the cloud platform study order, when you can confuse the ***t out of people with a diagram

Actual studying

An easyish path to studying for a certification goes like this:

  1. Read the certification requirements
  2. Do some web course that goes through the relevant material
  3. Read the documentation for most important services
  4. Do some practice exams
  5. Ace the exam

The certification requirements and service documentation are produced by the cloud platform organisation and readable on their websites. Web courses and practice exams are usually provided by third parties; I will give hints on good sources in the platform-specific posts of this series.
If you spend one hour daily, you should be able to complete your first cloud certification in two months, even without previous experience. I suggest that before trying any of the pro-level certifications you get at least one year of hands-on experience with some cloud platform. It does not have to be that specific platform, as those exams usually put more emphasis on “cloud thinking” and less on trivia.

Exam tactics

Even if the exams are conducted differently on different platforms, there are some universal strategies:

  1. Book your exam when you start studying
    • It works as a goal for your studies, giving that small ‘oomph’ to your motivation
  2. In the exam, don’t get stuck, time is of the essence
    • Mark the hard ones and come back later
  3. Don’t overstress
    • Even if you fail, you can always try again. You’ll also benefit from failure: now you know your weak points and can improve on that

Follow-up posts in this series:

Part 2: Amazon Web Services (AWS)
Part 3: Google Cloud Platform (GCP)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


AWS Re:Invent 2018, day 5

Today was the last day of the conference, and it’s starting to show. People are heading home, so the sessions are not as crowded and the last ones end around noon / 1 pm. People wearing conference badges are thinning out, replaced by regular tourists.
I managed to get into a very good chalk talk about CloudFormation given by Chuck Meyer and Ryan Lohan, so a big thanks to them! We had a good discussion about tooling, feature requests and so on. This is also something that many people might overlook: AWS prioritises features and their implementation based on feedback received from customers. You do not have to be an APN partner, hold certifications, or anything like that. As Amazon/AWS themselves say, they try to be the most customer-centric company there is. The most important thing is that instead of silently contemplating a feature or bug, you should make yourself heard. Join the AWS Developers Slack. Use Twitter and the AWS forums, email their evangelists or talk to their employees at any event. If the barrier is still too big, you can email me or my colleagues and we can bring your case to AWS. Make yourself heard!
Re:Invent 2018

Finally, some tips & tricks in case you find yourself in Re:Invent 2019!

First, don’t be too greedy. There are tons of good sessions, but the breakout sessions are recorded and can be watched later on YouTube. Chalk talks, workshops and hackathons are not, and in those you get to talk to the product teams or their representatives. I can highly recommend attending them, and if there are competing sessions at the same time, prioritise chalk talks and workshops over breakout sessions.
Second, as I mentioned in the first post, Las Vegas is designed to remove your money from you. Coffee/soda/water will be provided by Re:Invent, as well as some snacks. The expo area is excellent if you want to eat something, as there is usually food being served. Hotels are expensive, and if you need to buy something, there are multiple Walgreens (shops) on the Strip.
Third, the keynotes by Andy Jassy and Werner Vogels are great. However, you should consider skipping the keynotes if there is something else interesting happening at the same time, for example hackathons or gamedays. The keynotes are recorded, and any announcements made are also published on Twitter, blogs and so on.
Fourth, when booking your schedule, try to cluster the sessions/workshops/etc. you are attending on certain venues, because moving from one venue to another takes time. For example, The Mirage and the Venetian are very close to each other, so moving between them is much easier than moving from the Mirage/Venetian to Aria/Vdara. Correspondingly, Aria/Vdara/MGM are situated relatively close to each other.
Fifth, pick your parties. There are TONS of different parties hosted by AWS partners. You cannot visit all so choose early.
Sixth, talk to people. That might not be the easiest thing to do, especially for us Finns, whose culture is not the most extroverted. “Hey, my name is Aki. What do you do with AWS?” and the conversation takes off.
Now it’s time to sign off and get some rest before starting the long trip back home. Quoting Werner, now it’s time to “Go build”.
Re:Invent
Jeff Barr, Abby Fuller and Simon Elisha before the Twitch live stream

Aki Ristkari


AWS Re:Invent 2018, day 4

Today was Werner’s turn, and boy, he didn’t disappoint. The keynote was packed with some very welcome announcements. Again, some of the announcements might be missing from this post, but they can be found on Twitter, the AWS blogs and in the news.
Re:Invent
As usual, Werner used a good portion of the keynote to emphasise how critical it is to have control over one’s infrastructure, to avoid “black boxes” and to prepare for the fact that “everything fails all the time”. By now this shouldn’t be a surprise to anyone, and if your architecture does not take this into account, you might be in for some nasty surprises in the future. To help companies adopt best practices, AWS has the so-called “Well-Architected Framework”. This set of guidelines and best practices should be familiar to anyone using AWS; for those of you who have done the AWS certifications, it is foundational material. Now AWS has come up with a “Well-Architected” self-service testing tool. It can be used to assess how well your development, operational and governance practices align with the Well-Architected Framework.
However, today’s announcements were mostly about serverless computing, namely AWS Lambda. There were some huge updates announced, like custom runtimes, layers, ALB support, service integrations with Step Functions and IDE toolkits. The abstraction level keeps rising, and serverless computing is becoming more and more mainstream.
Re:Invent
To sum up the announcements: it is now possible to have your Lambda functions called by an ALB, while the functions themselves can be running Rust, C++ or even COBOL, and code can be shared between your functions. Your Step Functions can interact with other services, and you can debug your Lambda functions. Additionally, API Gateway now supports WebSockets. Streaming data has also become mainstream (pun intended): even though AWS has Kinesis, they announced managed Kafka. Running Kafka at scale is no trivial task, so this should be a relief for anyone using Kafka but not wanting to handle the maintenance it requires.
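As a rough sketch of what the new ALB integration looks like from the function’s side: an ALB forwards the HTTP request to Lambda as a JSON event, and the function returns a response object instead of a proxy integration through API Gateway. The handler name and message below are my own illustration; the event/response shapes follow the documented ALB-to-Lambda payload format.

```python
import json

def handler(event, context):
    """Minimal handler answering HTTP requests forwarded by an ALB."""
    # The ALB event carries the request path, headers, query string etc.
    path = event.get("path", "/")
    # An ALB target-group response must include statusCode and isBase64Encoded.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda behind an ALB",
                            "path": path}),
    }
```

Register the function as an ALB target group and the load balancer takes care of invoking it; no API Gateway in between.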
Building systems without any servers at all is now much more feasible, and serverless should be given very careful consideration when starting new projects. It is a valid option for new development, and instead of prejudice it should be embraced: serverless/Functions as a Service is here to stay.

Aki Ristkari


AWS Re:Invent 2018, day 3

Re:Invent
Today was a big day. Wednesday morning is usually when Andy Jassy (CEO of AWS) gives his keynote, and this year was no exception. The keynote was full of announcements, and it would be quite a task to go through all of them. I’ll leave some of them out and also include some announcements that weren’t in the keynote.

AI&ML

A huge chunk of the talk was about ML. Just as Google has its TPU processors to run ML models, AWS today announced Inferentia processors, which should be available next year. Google has a head start of several years, so it will be interesting to see how AWS’s offering matches Google’s. In addition to the processors, there were all kinds of enhancements, so if ML is your thing, you should definitely read the AWS blog posts about the new features. One thing I’m going to “kehdata” (sorry, English speakers: ‘daring’ is a rough translation of the term, but at Gofore it holds much more meaning. Email me and I’ll explain the concept) is AWS DeepRacer. DeepRacer is a radio-controlled car with an Atom processor, a video camera, ROS and so on. It would definitely be a fun way for people to practise ML and reinforcement learning.

DynamoDB on-demand

Traditionally, DynamoDB tables have had to have both read and write capacity defined, and performance was fairly static (assuming your data is modelled correctly and you know your access patterns). Then came autoscaling, which automatically tunes the read/write capacity values based on your traffic. Now we also have the option of on-demand billing. Based on the blog posts and documentation, the on-demand option scales very well right from the start, without the need to specify read/write capacity. The cost model is interesting and more closely matches, for example, Lambda’s model, where you only pay for what you use. If your DynamoDB usage is spiky, on-demand might be a very good fit, whereas a continuous, huge volume of traffic is much more cost-effective to run in the traditional mode, where you specify the performance limits yourself.
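In practice, switching to on-demand means setting `BillingMode` to `PAY_PER_REQUEST` when creating (or updating) the table and omitting the `ProvisionedThroughput` block entirely. A minimal sketch, with the table and key names being my own illustration rather than anything from the announcement:

```python
def on_demand_table_spec(table_name: str) -> dict:
    """Build the kwargs for dynamodb.create_table() in on-demand mode."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "pk", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "pk", "KeyType": "HASH"},
        ],
        # On-demand: pay per request, no read/write capacity units to manage.
        "BillingMode": "PAY_PER_REQUEST",
    }

# Usage (assumes boto3 and AWS credentials are configured):
#   import boto3
#   boto3.client("dynamodb").create_table(**on_demand_table_spec("events"))
```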

Re:Invent

AWS Control Tower

For several years the best practice has been to distribute applications/services/teams into different AWS accounts, and furthermore to segregate development, testing and production into their own accounts. A natural outcome of this is that the number of AWS accounts in organisations has exploded. So far, getting an overall view of all your accounts has required pretty much DIY solutions, and the bigger the organisation, the more painful this is.
Today AWS announced Control Tower, which aims to alleviate some of these problems. It automates the set-up of a baseline environment using existing services like Config, CloudTrail, Lambda, CloudWatch, etc. Read more about Control Tower on the product page: https://aws.amazon.com/controltower/features/
As an AWS partner, our company has a huge number of accounts, so for us Control Tower is a very welcome improvement. We are investigating what exactly it brings to the table and where you might still need custom solutions, so stay tuned for more blog posts concentrating solely on Control Tower. Currently it is in preview, so a signup and a bit of luck are needed to get an early taste of it.

Amazon Timestream

CloudWatch metrics aren’t exactly new. They have existed for a long time and are the de facto solution for collecting metrics from AWS services. In addition to CloudWatch, it is very common to see InfluxDB or Prometheus at our clients (usually combined with Grafana for visualising time-series data).
Today AWS announced Amazon Timestream, a managed time-series database. Targeted solely at time-series data, this puts Timestream into direct competition with Prometheus, InfluxDB, Riak TS and Timescale. Naturally, this is excellent news if you don’t want to manage servers and want to have your time-series database as a service: no more EC2 instances running Prometheus, no more DIY solutions for HA and so on. AWS’s mantra has long been to leave the “undifferentiated heavy lifting” to them and concentrate on your application and business logic, and Timestream follows this idiom perfectly. Timestream is currently in preview, so a signup and a bit of luck are needed to test it.

Quantum ledger database

Quantum Ledger Database and managed blockchain: well, now we have all the buzzwords in one blog. AI/ML is handled already, and now it’s time for blockchain. AWS today announced two services loosely related to each other, both currently in preview. Quantum Ledger Database (QLDB) is a database with a central trusted authority and immutable, append-only semantics that keeps the complete history of all the changes ever made. What does it have to do with blockchain? Well, all the changes are chained and cryptographically verifiable. There is a huge number of use cases! In addition to QLDB, AWS also announced a managed blockchain service which supports Hyperledger Fabric and Ethereum (Hyperledger comes first, Ethereum later).

CodeDeploy

There were other new features launched that might stay under the radar if you focus only on the keynote. One that is very relevant for my current project is CodeDeploy’s ability to do native blue/green deployments to ECS and Fargate (more here: https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/).
This will definitely be tested out next week.
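For the curious, a hedged sketch of the AppSpec that CodeDeploy uses for an ECS blue/green deployment, expressed here as a Python dict (normally you would write it as YAML or JSON). The container name and port are illustrative placeholders, not values from my project; the structure follows CodeDeploy’s AppSpec format for ECS.

```python
def ecs_blue_green_appspec(task_definition_arn: str) -> dict:
    """Build an AppSpec-shaped dict for a CodeDeploy ECS blue/green deployment."""
    return {
        "version": 0.0,
        "Resources": [
            {
                "TargetService": {
                    "Type": "AWS::ECS::Service",
                    "Properties": {
                        # The new ("green") task set CodeDeploy shifts traffic to.
                        "TaskDefinition": task_definition_arn,
                        # Which container/port the load balancer routes to.
                        "LoadBalancerInfo": {
                            "ContainerName": "web",
                            "ContainerPort": 80,
                        },
                    },
                }
            }
        ],
    }
```

CodeDeploy spins up the green task set from the referenced task definition, shifts the load balancer traffic over, and tears down the blue set once the deployment is healthy.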

AWS App Mesh

One more nice announcement was AWS App Mesh, an Envoy-proxy-based service mesh for EKS, ECS and Kubernetes running on EC2. Like other service meshes, the idea is that applications or microservices do not need built-in functionality for service discovery (and possibly load balancing or circuit breaking); the service mesh takes care of it, and the applications are simpler to implement. App Mesh is in preview, but more information can be found on GitHub: https://github.com/awslabs/aws-app-mesh-examples
As I said, this is not a definitive list of all the changes; there are literally tons of new things! Let’s see if Andy left any announcements for Werner tomorrow (hopefully so).
Re:Invent

Aki Ristkari
