Where GCP and AWS are like two peas in a pod, Azure is something different. Azure has taken much of what can be found in traditional Microsoft infrastructure and brought it to the cloud, and much thought has been given to the idea that the cloud should work as an extension of that on-premises Microsoft infrastructure. In recent times Azure has taken steps towards becoming more like its competitors; it has dropped pretty much every container orchestration service other than Kubernetes and is slowly adopting Availability Zones, for example.
Use cases for Azure are mostly projects that lean heavily on Visual Studio or Azure AD, or that need to pair with existing on-premises Microsoft infrastructure. Out of the cloud offerings Azure is my least favourite, maybe because I'm not too into the Microsoft ecosystem in the first place.

Certification

As with Google Cloud, Azure certifications are a living, moving thing: they evolve over time rather than being snapshots of an era. In addition, Microsoft (rather painfully) retires exams and rearranges its course palette pretty often. I did all my certs in January–February 2018 by taking 70-535, 70-533 and 70-473, acquiring the MCSE, MCSA and MCP certificates. While writing this blog post I noticed that a) MCSE, MCSA and MCP are no longer a thing, as the certifications are now role-based (like with AWS and GCP), and b) all my exams (except for 70-473) have been retired.
All the new certifications can be found here: https://www.microsoft.com/en-us/learning/browse-new-certification.aspx. You might find some resemblance to the AWS certificates, as the naming convention is conveniently pretty much the same. One thing still differentiates Azure from AWS and GCP: you don't always take a single exam to earn a certificate; sometimes you have to take several. Also, some certs have prerequisites, which is something AWS gave up in 2018. You can find all the exams here: https://www.microsoft.com/en-us/learning/exam-list.aspx
Here is a chart of the certs and their corresponding exams:

Certificate | Exam(s) | Prerequisite
Azure Fundamentals | AZ-900 | –
Azure Administrator Associate | AZ-100, AZ-101; AZ-102 (transition exam for those who passed 70-533) | –
Azure Developer Associate | AZ-202 (transition exam for those who passed 70-532); AZ-203 | –
Azure DevOps Engineer Expert | AZ-400 (in beta) | Azure Administrator Associate or Azure Developer Associate
Azure Solutions Architect Expert | AZ-300, AZ-301; AZ-302 (transition exam for those who passed 70-535) | Azure Administrator Associate or Azure Developer Associate

The transition exams will retire in June 2019, so if you want to upgrade your certs, do so fast. I thought I would be done with certs for a while, but it's possible that I'll spend a day this spring studying for AZ-302 and trying the transition exam. I'm not especially thrilled by this change of plans, but I'd hate to waste all the time I spent on the Azure exams either.

Exam preparation

Even if the exams have changed, I think the training methods for Azure stay the same. When I searched for viable online courses, pretty much the only good source was Scott Duffy on Udemy, and I know a few other people who have passed the Azure certs by following his advice. I'm quite confident that I saw Scott's courses on acloud.guru at some point too, but I can't seem to find them there anymore.
You should also use Microsoft's special offers when they are applicable. Usually you get an exam, an exam retake and a practice exam for the price of a single exam. As the exams are new, though, first check https://www.mindhub.com for the availability of the practice exam; sometimes it takes a while before practice exams can be bought. The practice exams give you good insight into the exam area and, instead of just "right" and "wrong", also point you to where you can learn more about the subject. Knowing the model of the exam and having a free retry also give some room for experimenting (or brute force), taking some of the pressure off the exams.

Taking the exam

Where Azure shines is that you can take the exam at your workplace or at home, provided you have a webcam and a mic on your machine. The exam situation is rather silly, as they inspect a lot of stuff in the room and on your person, but you can schedule an exam for the same day and get it done. The system failed me only once, so learn from my mistake: if you do the system check early on (which I suggest if you have never done Azure exams this way before), run it again on the day of the exam. For me the exam software failed somewhat miserably, as there was a new update that wouldn't start, robbing me of an exam try. As it was a software failure I got a new try for free, but I had to wait a week for the issue to be diagnosed, during which I couldn't attempt the exam again.
Phew. This concludes the blog series on cloud certifications. The rather anticlimactic ending was not of my design, as my exam-specific knowledge became obsolete in one giant whoop. If you are interested in cloud and already have some experience with it, check whether one of our open positions would suit you at https://gofore.com/en/careers/

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 2: Amazon Web Services (AWS)
Part 3: Google Cloud Platform (GCP)

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


In this blog post it becomes obvious that, out of all the certification paths available, I've chosen the one more related to the Ops side of the DevOps spectrum. I've been there when infrastructure couldn't be considered code, when a server needed fitting into a rack*, and I've never written "real code" in my life apart from simple Perl/Python scripting. With cloud, I've continued to build upon that foundation. This is reflected in my certifications: I've done the Ops path but left the Dev side totally untapped (a trend that continues in the next [Azure] blog post). After doing my share of cloud certifications, I've dipped my toes into the realm of modern web programming. Does this mean that I should do the Dev certs next? Nope. The Pro level certs take a lot of time, and I don't see that investment of my free time paying back any time soon.
* Yes, cloud computing is just a cluster of computers that also need fitting into a rack and petting, but it's not really a mainstream job for a system administrator anymore, is it?
OK, enough of my motivational circumstances. What about GCP then? I think out of all the clouds it's the most user-friendly: the web console is just way better than AWS's, the shell you get in the web console is nice, and I would totally like to use App Engine on a project (with live debugging and other shiny thingies). Stackdriver as a whole still feels a little weird to me, but it has tons of functionality. And with Google Kubernetes Engine being arguably the best in the business, I would choose GCP whenever I had to run a containerised production load.

Certifications

I've done the Associate Cloud Engineer and Professional Cloud Architect certs. There are also Professional Data Engineer and Professional Cloud Developer certifications, and even one for G Suite, but I've got no experience with those. Unlike AWS and Azure, Google actually does its own web training… and it is far better than any of the third-party offerings. How great is that! They also send you some (mostly useless) swag when you complete any certificate, which is a nice gesture.
You can find all the exams here: https://cloud.google.com/certification/
Picture1: The least worst swag I got from Google

Associate exam

I did the Associate Cloud Engineer beta exam as a sort of practice exam for the professional one (because why not?). It took a few months for the exam results to arrive (with beta exams it tends to be that way). I'd actually totally forgotten about the whole exam by then, and it turned out that I was one of the first one hundred to get the certificate. The exam was the least theoretical cert exam I've ever completed. If I had to give any hints, it would be to get familiar with the web console and the SDK tools. Use them, preferably in a practice project. Yes, you have to know some commands. Yes, you have to know where stuff is in the web console. This is an exam for a Cloud Engineer; it measures how well you can do stuff.
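To give a flavour of the command-line side, here is a minimal sketch of the kind of gcloud commands I'd consider worth knowing by heart. The project and cluster names are made up, and the commands are wrapped in Python purely for illustration:

```python
import subprocess

# A few representative gcloud commands to know cold; the project and
# cluster names below are placeholders for your own practice project.
commands = [
    "gcloud config set project my-practice-project",         # switch the active project
    "gcloud compute instances list",                         # inventory your VMs
    "gcloud container clusters get-credentials my-cluster",  # point kubectl at a GKE cluster
]

for cmd in commands:
    subprocess.run(cmd.split(), check=True)  # stop on the first failing command
```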

Pro exam

Out of all the cert exams I've done, the Professional Cloud Architect was my favourite: hard, but interesting. It threw really, really odd curveballs at me that couldn't be prepared for, and learning trivia by heart had no value in this one. It measured whether you knew your cloud, and that means everything that comes with it. It was also the exam I spent the most time studying for; I think it took some three months at one to two hours a day for me to get comfortable with the whole exam area.

Study materials

For both the associate and the pro I suggest the Google-made Architecting with Google Cloud Platform specialization. It might be a little too deep for the associate, but better too deep than too shallow.
Also for the pro, I would supplement the studying with the following:

Also, as Google's certs are a moving target (they get updated constantly), keep up with the news on their blog, and I strongly suggest that you watch the Google Next '18 talks on the relevant services.

Doing the exam

At least in Finland, the exam is only available as a proctored exam, where someone observes you while you take it, and only in Espoo or Helsinki. You can reserve your exam time at https://cloud.google.com/certification/.
Unlike with AWS and Azure, at the end of the exam you are told whether you passed or not, but given no indication of how well you fared. No points, no percentages, nothing. I think this can be extremely frustrating for people who do not pass the exam, as they have no idea whether they were even close. Just hope you'll see the "you passed" message and get to order some swag for yourself.

Shameless marketing

I’m also head of GDG Cloud Tampere and we’ll be hosting many nice events this year. Join the fun at https://www.meetup.com/GDG-Cloud-Tampere/

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 2: Amazon Web Services (AWS)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


My journey to cloud environments started with AWS. First I staggered through the internet trying to find a good guide for understanding cloud computing, different platforms and terms used. I spent considerable time on this (retrospectively) useless wandering until I started studying for the Solutions Architect Associate certification and got my first bite of well-structured course material on my first ever cloud: AWS. Even if some people consider certifications silly and a waste of time, the certification courses themselves are a brilliant way of grasping how a cloud platform works.
My personal opinion on AWS is that it may not be the most user-friendly platform, but it’s still the most versatile one out there. If there is something that you can do in cloud, you can probably do it in AWS. This being the case, I would choose AWS as a running platform unless there is some reason not to.
Picture1: AWS’s chart of all the certificates

Certifications

An up-to-date list of AWS's certifications can be found here. No new associate or pro level certs have been added during the time I've been around the scene, but the existing exams have been slowly updated to match the AWS of 2019. The names of the certifications have stayed the same, though. Unlike Azure and GCP, where exams are kept continuously up to date, AWS's exams represent a snapshot of a given time. From the exam taker's perspective this is a good thing, but from a practical implementation perspective it's a bad one. The exam taker can expect the study material to stay constant for years, and as such there is lots of exam material online to aid you. In practice, you end up studying old material, and usually the newest Re:Invent stuff is not in the exams. The worst (or best?) example of this is the reserved instance classes (heavy/light utilization), which are obsolete and no longer covered by any official documentation, yet I've found questions about them on both the Solutions Architect Associate and Professional exams. Both of these exams have now been updated, and I doubt there are any reserved-instance-class questions left, but in a few years there will be something similar.
Certifications used to be valid for two years, with a one-year grace period during which you could do a simple re-certification even though your cert had expired. Because that system was rather confusing, the new certs are simply valid for three years, during which they can be extended by taking the re-certification exam. There also used to be a requirement to pass a specific associate exam before you could even attempt a Pro cert exam, but this restriction was removed in late 2018. This doesn't mean you should go straight for the Pro certifications, unless you have worked with the platform for years using a plethora of services.

Associate certifications


Picture2: Associate exams and shared material
Associate certs share around 70% of the same base "this is AWS" material with each other, covering networking, IAM, storage etc. If you do the Solutions Architect Associate certification first (which I recommend), you can do the Developer and SysOps courses with a few days of prepping. Should you choose this route? Well, it looks better on your CV, but really it brings little extra to the table. The Developer certification material covers developer-centred topics such as DynamoDB more thoroughly, and SysOps has some more detail on OpsWorks and Elastic Beanstalk. You really only have to study the difference between your first associate exam and the new one you'll be taking. If you have a sponsor for your exam fees and you want to boost your CV, go for it.

Professional certifications

The pro level exams cover some common ground with each other, both being AWS exams, but they share fewer details than the Associate exams do. For me, it took a few months to study for each exam individually. I initially started reading for the DevOps Pro right after I got my associates done, but it was too steep a hill for me to climb, and I ran out of motivation around halfway through the course materials. One year later, with actual AWS projects under my belt, I read through the materials, which now felt easy, and passed the exam quite easily. I tried studying for the Architect Pro after that but hit that familiar wall once again; fast forward through one more year of AWS projects, and I was telling a colleague how "this course material brings very little new to me" and passing the exam.
For professional certifications I have only one piece of advice:
Do. Actual. Projects. On. Cloud.
After that they are easy.
I think holders of Pro level certifications are somewhat respected, should such a title appear on your CV, but once again I don't really think there is much difference between having one or two.

Speciality certifications

Unlike the general ones, the speciality certs share very little with each other, each concentrating on one thing and going deep into it. I have to admit that I haven't done any of the special certs, only skimmed through their content. I intended to do the Advanced Networking certification but gave up around halfway through the course material, when it was going through BGP's finest details. As AWS certificates go they are quite new, with new ones coming out every now and then, so I don't know how much reputation you gain by passing them.

Study Materials

I totally and wholeheartedly suggest that you use www.acloud.guru for studying. The guys and gals there are doing a fabulous job with their online courses. acloud.guru has a practice exam (usually) at the end of each course, which is quite sufficient. You can also buy a practice exam from AWS with some 20 questions, but it's usually badly written, and even if you know your stuff (and pass the actual exam) you might end up with just 60% of the questions correct. If you are feeling cheap and are only going to do one certificate, you can grab an acloud.guru course dirt cheap from www.udemy.com; they have a sale going on every day.
In addition to acloud.guru, I complemented the materials with those on www.linuxacademy.com for the Pro Solutions Architect course, as at that time there was no practice exam at the end of the acloud.guru course and I felt that some of the services were not explained in enough detail.
I read every whitepaper suggested in the courses, and I also read the FAQs and documentation for the most important services. As noted in the first blog post of this series, do the following:

  1. Read the certification requirements
  2. Take part in a web course that goes through the relevant material
  3. Read the documentation for the most important services
  4. Do some practice exams
  5. Ace the exam

What I did forget to mention, though, is using the service. For every service on the exam, you should use the actual service. If you have a pet project to use them on, great. If you don't, just click through the dialogues so that you understand every option and how it influences the end result. For the Pro certs you also have to do some real work with cloud computing; otherwise the wall is too high for you to climb, sorry.

Doing the exam

You can reserve your exam time at www.aws.training. If English is not your first language, remember to mark this on the portal BEFORE reserving the exam; it gives you some extra time. You can do this in the AWS Training and Certification portal by clicking "Upcoming events" → "Request Exam Accommodations" → "Request Accommodation" → "ESL +30 MINUTES".
The options are basically a monitored exam, where a person watches how you are faring, or a kiosk. In Finland the monitored option is available in Helsinki, and kiosks can be found in Helsinki and Tampere. There has been a lot of conversation about the kiosk PCs rebooting in the middle of exams, possibly multiple times, and about how they monitor whether you cover your mouth, creating extra stress. Personally, I liked the kiosk experience, as I could take the exam on the other side of the road from our office in Tampere. Yes, the passport recognition mechanism was broken (as the receptionist told me) and the person on the other end of the line wouldn't or couldn't understand that, requiring me to start the exam registration a few times over, but the exam itself went quite smoothly.
Right after the exam you are told how you fared, and within a few minutes you also get the results by email, with percentage grading for the different areas of the exam.

Other posts in this series:

Part 1: Introduction to cloud certifications
Part 3: Google Cloud Platform (GCP)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


This blog post is the first in a new series that will be published over the following weeks. The aim is to cover getting certified on all the major cloud platforms currently (1/2019) available in Europe: Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. I've done my Pro level certs on all these platforms quite recently (within a year), so I have some knowledge of the subject. I've written similar texts on our internal wiki, but as there are no secrets in them, I decided to rewrite the material in a more reader-friendly format (and less like a stream of consciousness).
Like the good authors of certification guides, I'm not claiming that you will get certified by doing what I advise, but I can safely say that following my advice raises your probability of success.
This first blog post is labelled 'introduction'. I will cover general points about certifications, a suggested path for going through the clouds (for the hardcore "gotta get 'em all" cloud people out there) and some general notes on preparing for the certifications. The later posts will each focus on one of the cloud platforms.
The three musketeers

Correlation between getting a certification and knowing your stuff

Before going to the actual how-to part, let's think for a while about what a certificate actually is and whether getting one holds any significance. Being a holder of a certificate means that you have passed an exam where your general knowledge of the platform has been tested. Depending on your path (development, architecture etc.) your knowledge goes a little deeper in certain areas, but you most likely need to know the same basics for all associate-level certs on a single platform. Passing a pro level exam means that you also possess deeper knowledge of the subject, plus the problem-solving skills that earn you the title Pro: professional proficiency in the subject. This is not to be mistaken for being a guru. Does a certificate make you a cloud engineer, ready to be quickly hired and put on a challenging customer project? A Pro level cert certainly implies that, but an associate? No. An associate cert is a first stepping stone, meaning that you know some rules and best practices on the subject, but without any elbow grease on the platform it amounts only to a good start. Of course, you can just put your study cap on, study like one possessed and pass a pro exam without ever launching a single instance, but I dare say that's quite an uncommon scenario.
Why bother with associate certifications then? Well, as said previously, they indicate that you know the best practices of the platform, and while that might not land you your dream job, it's still quite a big deal. When working in a cloud environment it's very easy to deploy applications and create virtual machines, but it's also really easy to do it wrong: using architecture not fit for the cloud age or, in the worst case, compromising security. Yes, you could just watch the videos and read the documents and be equally knowledgeable on the subject as someone with a certificate, but if you took all that time to study, why not do the certification while you are at it?

Clouds and order of conquest

If your work or side projects do not involve any of the platforms and you are totally free to choose where to begin, I would (once again) pick AWS. AWS is by far the market leader, and mastering it still opens more doors than GCP and Azure combined. If you are more curious about GCP, for example, pick that. In studying, practicality falls second to motivation.
If you really have a lot of free time on your hands and want to get certified on all the cloud platforms, you can start wherever you want… but if you start with either AWS or GCP, do the remaining one before going for Azure. Terminology- and function-wise, AWS and GCP are quite similar, to the extent that Google has even published a quite handy cross-reference document for AWS experts to grasp their platform. While the terms for the higher levels of abstraction, such as block storage and object storage, are similar in Azure too, the Microsoft way of doing cloud is still quite different. Understanding Azure requires you to forget how stuff is done on the other cloud platforms and learn the Azure way. I did the AWS → Azure → GCP trip and cannot recommend it to anyone.
Why use one sentence to describe the cloud platform study order, when you can confuse the ***t out of people with a diagram

Actual studying

An easyish path for studying for a certification goes like this

  1. Read the certification requirements
  2. Do some web course that goes through the relevant material
  3. Read the documentation for most important services
  4. Do some practice exams
  5. Ace the exam

The certification requirements and service documentation are produced by the cloud platform organisations and can be read on their websites. Web courses and practice exams are usually provided by third parties; I will give hints on good sources in the platform-specific posts of this series.
If you spend one hour daily, you should be able to complete your first cloud certification in two months, even without previous experience. I suggest that before trying any of the Pro level certifications you get at least one year of hands-on experience with some cloud platform. It does not have to be that specific platform, as those exams usually put the emphasis on "cloud thinking" rather than trivia.

Exam tactics

Even if there are differences in how the exams are done on different platforms, there are some universal strategies:

  1. Book your exam when you start studying
    • It works as a goal for your studies, giving that small ‘oomph’ to your motivation
  2. In the exam, don’t get stuck, time is of the essence
    • Mark the hard ones and come back later
  3. Don’t overstress
    • Even if you fail, you can always try again. You’ll also benefit from failure: now you know your weak points and can improve on that

Follow-up posts in this series:

Part 2: Amazon Web Services (AWS)
Part 3: Google Cloud Platform (GCP)
Part 4: Microsoft Azure

Tero Vepsäläinen

Tero is an ops-guy, coach and a service manager. He is responsible for the operative side of Gofore Cloud. He also likes to keep his hands dirty by planning and implementing cloud native systems.


AWS Re:Invent 2018, day 5

Today was the last day of the conference, and it's starting to show. People are heading home, so the sessions are not as crowded, and the last sessions end around noon or 1 pm. People wearing conference badges are thinning out, replaced by more regular tourists.
I managed to get into a very good chalk talk about CloudFormation given by Chuck Meyer and Ryan Lohan, so a big thanks to them! We had a good discussion about tooling, feature requests and so on. This is also something that many people might overlook: AWS prioritises features and their implementation based on the feedback it receives from customers. You do not have to be an APN partner, have done certifications, or anything like that. As Amazon/AWS themselves say, they try to be the most customer-centric company there is. The most important thing is that instead of silently contemplating a feature or bug, you should make yourself heard. Join the AWS Developers Slack. Use Twitter, the AWS forums, email their evangelists or talk to their employees at any event. If the barrier is still too big, you can email me or my colleagues and we can bring your case to AWS. Make yourself heard!

Finally, some tips & tricks in case you find yourself at Re:Invent 2019!

First, don't be too greedy. There are tons of good sessions, but the thing is: breakout sessions are recorded and can be watched later on YouTube. Chalk talks, workshops and hackathons are not, and in those you get to talk to the product teams or their representatives. I can highly recommend attending them, and if there are competing sessions at the same time, prioritise chalk talks and workshops over breakout sessions.
Second, as I mentioned in the first post, Las Vegas is designed to remove your money from you. There will be coffee/soda/water provided by Re:Invent, as well as some snacks. The expo area is excellent if you want to eat something, as there is usually food being served. Hotels are expensive, and if you need to buy something there are multiple Walgreens (shops) on the Strip.
Third, the keynotes by Andy Jassy and Werner Vogels are great. However, you should consider skipping a keynote if there is something else interesting happening at the same time, for example hackathons or gamedays. Keynotes are recorded, and any announcements made are also published on Twitter, blogs and so on.
Fourth, when booking your schedule, try to cluster the sessions/workshops/etc. you are attending, since moving from one venue to another takes time. For example, The Mirage and the Venetian are very close to each other, and moving between them is much easier than moving from the Mirage/Venetian to Aria/Vdara. On the other hand, Aria, Vdara and the MGM are situated relatively close to each other.
Fifth, pick your parties. There are TONS of different parties hosted by AWS partners. You cannot visit them all, so choose early.
Sixth, talk to people. That might not be the easiest thing to do, especially for us Finns, whose culture is not exactly extroverted. "Hey, my name is Aki. What do you do with AWS?" and the conversation takes off.
Now it's time to sign off and get some rest before starting the long trip back home. Quoting Werner: now it's time to "Go build".
Jeff Barr, Abby Fuller and Simon Elisha before the Twitch live stream

Aki Ristkari


AWS Re:Invent 2018, day 4

Today was Werner's turn, and boy, he didn't disappoint. The keynote was packed with some very welcome announcements. Again, some of the announcements might be missing from this post, but they can be found on Twitter, the AWS blogs and in the news.
As usual, Werner used a good portion of the keynote to emphasise how critical it is to have control over one's infrastructure, to avoid "black boxes" and to prepare for the fact that "everything fails all the time". By now this shouldn't be a surprise to anyone, and if your architecture does not take it into account, you might be in for some nasty surprises in the future. To help companies adopt best practices, AWS has the so-called Well-Architected Framework. This set of guidelines and best practices should be familiar to anyone who is using AWS; for those of you who have done the AWS certifications, it is the foundation you learn. Now AWS has come up with a Well-Architected self-service testing tool, which can be used to assess how well your development, operational and governance practices align with the Well-Architected Framework.
However, the announcements today were mostly about serverless computing, namely AWS Lambda. There were some huge updates announced, like custom runtimes, layers, ALB support, service integrations for Step Functions and IDE toolkits. The abstraction level keeps rising, and serverless computing is becoming more and more mainstream.
To sum up all the announcements: it is now possible to have your Lambda functions called by an ALB, while the Lambdas themselves can be running Rust, C++ or even COBOL, and code can be shared between your functions. Your Step Functions can interact with other services, and you can debug your Lambda functions. Additionally, API Gateway now supports WebSockets. Streaming data has also become mainstream (pun intended), and even though AWS has Kinesis, they announced managed Kafka. Running Kafka at scale is no trivial task, so this should be a relief for anyone using Kafka but not wanting to handle the maintenance it requires.
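As a taste of the new ALB integration, here is a minimal sketch of a Python Lambda handler answering an HTTP request forwarded by an ALB. The handler name and the greeting logic are made up; the response shape (statusCode/statusDescription/headers/body) is what an ALB target group expects back from a Lambda:

```python
def handler(event, context):
    # The ALB forwards the HTTP request as the event: method, path,
    # headers, query string parameters and body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # The ALB turns this dict back into an HTTP response for the caller.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello, {name}!",
    }
```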
Building systems without any servers at all is now much more feasible, and serverless should be given very careful consideration when starting new projects. It could be said that serverless is a valid option for new development, and instead of prejudice it should be embraced, since serverless/Functions as a Service is here to stay.

Aki Ristkari


AWS Re:Invent 2018, day 3

Today was a big day. Wednesday morning is usually when Andy Jassy (CEO of AWS) gives his keynote, and this year was no exception. The keynote was full of different announcements, and it would be quite a task to go through all of them, so I'll leave some out and also include some announcements that weren't in the keynote.

AI&ML

A huge chunk of the talk was about ML. Like Google with its TPU processors for running ML models, AWS today announced Inferentia processors, which should be available next year. Google has a head start of several years, so it will be interesting to see how AWS's offering matches Google's. In addition to the processors there were all kinds of enhancements, so if ML is your thing you should definitely read the AWS blog posts about the new features. One thing I'm going to "kehdata" (sorry, English speakers: 'daring' is a rough translation of the term, but at Gofore it holds much more meaning; email me and I'll explain the concept) is AWS DeepRacer. DeepRacer is a radio-controlled car with an Atom processor, a video camera, ROS and so on. It would definitely be a fun way for people to practise ML and reinforcement learning.

DynamoDB on-demand

Traditionally, DynamoDB tables had to have both read capacity and write capacity defined, and performance was fairly static (assuming your data was modelled correctly and you knew your access patterns). Then came autoscaling, which automatically tunes the read/write capacity values based on your traffic. Now we also have the option of on-demand billing. Based on the blog posts and documentation, the on-demand option scales very well right from the start, without the need to specify read/write capacity at all. The cost model is interesting and more closely matches, for example, Lambda's model, where you only pay for what you use. If your DynamoDB usage is spiky, then on-demand might be a very good fit, whereas continuous, high-volume traffic is much more cost-effective to run in the traditional mode, where you specify the capacity yourself.
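To make the difference concrete, here is a minimal boto3 sketch (the table and attribute names are made up) creating a table in on-demand mode; note that there are no provisioned capacity numbers anywhere:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a table in on-demand mode: instead of provisioning read/write
# capacity up front, you are billed per request.
dynamodb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # the traditional mode is "PROVISIONED"
)
```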


AWS Control Tower

For several years the best practice has been to distribute applications/services/teams into different AWS accounts, and furthermore to segregate development, testing and production into accounts of their own. A natural outcome of this is that the number of AWS accounts in organisations has exploded. So far, getting an overall view of all your accounts has relied pretty much on DIY solutions, and the bigger the organisation, the more pain it feels from this.
Today AWS announced Control Tower, which aims to alleviate some of these problems. Automating the set-up of a baseline environment, Control Tower uses existing services like Config, CloudTrail, Lambda, CloudWatch, etc. Read more about Control Tower on the product page: https://aws.amazon.com/controltower/features/
As an AWS partner, our company has a huge number of accounts, so for us Control Tower is a very welcome improvement. We are investigating what exactly it brings to the table and where you might still need custom solutions. Stay tuned for more blog posts concentrating solely on Control Tower. It is currently in preview, so a signup and a bit of luck are needed to get an early taste of it.

Amazon Timestream

CloudWatch metrics aren't exactly new. They have existed for a long time, and CloudWatch is the de facto solution for metrics collection from AWS services. In addition to CloudWatch, it is very common to see InfluxDB or Prometheus at our clients (usually combined with Grafana for visualising the time-series data).
Today AWS announced Amazon Timestream, a managed time-series database. Targeted solely at time-series data, this puts Timestream into direct competition with Prometheus, InfluxDB, Riak TS and Timescale. Naturally, this is excellent news if you don't want to manage servers and want to have your time-series database as a service: no more EC2 instances running Prometheus, no more DIY solutions for HA and so on. The AWS mantra has long been to leave the "undifferentiated heavy lifting" to them and concentrate on your application and business logic, and Timestream follows this idiom perfectly. Timestream is currently in preview, so a signup and a bit of luck are needed to test it.

Quantum ledger database

A quantum ledger database and managed blockchain: well, now we have all the buzzwords in one blog post. AI/ML has been handled already, so now it is time for blockchain. AWS announced today two services loosely related to each other, both currently in preview. Quantum Ledger Database is a database with a central trusted authority and immutable, append-only semantics, holding the complete history of all the changes ever made. What does it have to do with blockchain? Well, all the changes are chained and cryptographically verifiable. There is a huge number of use cases! In addition to Quantum Ledger Database, AWS also announced a managed blockchain service, which supports Hyperledger Fabric and Ethereum (Hyperledger comes first, Ethereum later).

CodeDeploy

There were other new features launched that might stay under the radar if the focus is only on the keynote. One that is very relevant for my current project is CodeDeploy's ability to do native blue/green deployments to ECS and Fargate (more here: https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/).
This will definitely be tested out next week.

AWS App Mesh

One more nice announcement was AWS App Mesh, an Envoy-proxy-based service mesh for EKS, ECS and K8s running on EC2. Like other service meshes, the idea is that applications or microservices do not need built-in functionality for service discovery (and possibly load balancing or circuit breaking); the service mesh takes care of it, and the applications are simpler to implement. App Mesh is in preview, but more information can be found on GitHub: https://github.com/awslabs/aws-app-mesh-examples
Like I said, this is not a definitive list of all the new changes; there are literally tons of new things! Let's see if Andy left any announcements for Werner tomorrow (hopefully so).

Aki Ristkari


AWS Re:Invent 2018, day 2

Things are moving fast. Day 2 included the Partner keynote and didn't contain that many technical announcements; the news in the keynote was mostly about the AWS Marketplace.

Marketplace

AWS introduced Private Marketplace, which allows customers to create a customised catalogue of pre-approved products from the AWS Marketplace. This lets administrators select only products that are authorised or that otherwise meet the criteria decided by your organisation. The Private Marketplace can be customised with your own branding: the logo, texts and colour scheme can be changed to match your organisation. All controls set up by administrators for the Private Marketplace are applied across AWS Organizations.
This kind of customisation and pre-approved catalogue of SKUs can be useful for bigger organisations that wish to control what gets deployed. However, using such a feature requires vigilance over what you offer through the Private Marketplace; introducing too much command & control may have a detrimental effect on the agility and speed the cloud provides.
In addition to the Private Marketplace, AWS introduced container products in the Marketplace. These container products can be run on ECS, EKS and Fargate, and they come either as task definitions, Helm charts or CloudFormation templates. This announcement makes both VMs and containers first-class citizens in the Marketplace, and it also offers sellers new options for distributing their software.

Ground Station

The Marketplace wasn't the only fascinating new release. Ground Station is a service for communicating with satellites in orbit. This basically means that launching a satellite and talking to it can now be accomplished with a very small amount of money compared to the past, when in addition to launch costs you had to build your own ground station (radios, antennas, etc.). Universities, schools and companies can now launch satellites if they want to. Space technology is being brought to the public, and this will hopefully help create new innovations and products/services.
I have to admit that "Satellite Communications as a Service" (should it be SCaaS?) wasn't even on my list when I wondered what AWS might publish during the week. There are some caveats in the service, though! You will need a Federal Communications Commission (FCC) license and the NORAD ID of your satellite, and you will need to contact AWS in order to activate the service, so you cannot just arbitrarily book antenna time and start shooting radio messages into the sky.

CloudWatch Logs++

Amazon CloudWatch Logs Insights brings Kibana-like features to CloudWatch. It can read multiple formats, and an especially useful feature is that it auto-detects field names if your logs are JSON-formatted. This might reduce the need for an ELK stack, and it brings CloudWatch dashboards to a whole new level.
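As a sketch of how Insights can be driven programmatically with boto3 (the log group name and the query are made up for illustration):

```python
import time
import boto3

logs = boto3.client("logs")

# Start an Insights query over the last hour of a log group.
query = logs.start_query(
    logGroupName="/aws/lambda/my-function",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 20",
)

# The query runs asynchronously; poll until it completes.
while True:
    result = logs.get_query_results(queryId=query["queryId"])
    if result["status"] == "Complete":
        break
    time.sleep(1)

# Each result row is a list of {field, value} pairs.
for row in result["results"]:
    print({cell["field"]: cell["value"] for cell in row})
```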

DynamoDB

Finally, it is time to talk about DynamoDB. Today it was announced that DynamoDB now has transactions. Transaction support makes it possible to use DynamoDB in a huge number of new use cases. Now, DynamoDB is a controversial subject, especially among developers (this is my experience, YMMV). Modelling your data for a NoSQL database is not always straightforward: developers don't usually have to care that much about data access patterns, but modelling the data so it fits nicely into DynamoDB's access patterns is the first thing they have to think about. It has been my observation that developers tend not to like it.
If you want to know more, I suggest that you watch this year's DAT401 session on YouTube once it is available (DAT401 – Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB).
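To illustrate what the new transactional API looks like, here is a minimal boto3 sketch of an all-or-nothing transfer between two items (the table, key and attribute names are made up):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Debit one account and credit another atomically: either both updates
# are applied or neither is (e.g. when the balance check fails).
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"account_id": {"S": "alice"}},
                "UpdateExpression": "SET balance = balance - :amount",
                "ConditionExpression": "balance >= :amount",  # no overdrafts
                "ExpressionAttributeValues": {":amount": {"N": "100"}},
            }
        },
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"account_id": {"S": "bob"}},
                "UpdateExpression": "SET balance = balance + :amount",
                "ExpressionAttributeValues": {":amount": {"N": "100"}},
            }
        },
    ]
)
```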

Other news

– Amazon Comprehend now understands medical text
– A new service AWS Elemental MediaConnect for video ingestion and distribution
Day 3 will be huge since Andy Jassy’s keynote is in the morning and it will be packed with updates.

Aki Ristkari


AWS Re:Invent 2018, day 1

Now that Re:Invent is in full swing, the flurry of new features is relentless. Let's go through a couple of the most noteworthy announcements from Day 1.

IoT

IoT has received a lot of love.

  • IoT SiteWise (preview) targets entire plants and industrial equipment instead of the small sensors normally associated with IoT.
  • IoT Events (preview) is aimed at event correlation between multiple sensors; it helps to recognise system-wide events and enables alerting on such occurrences.
  • IoT Greengrass is extended with external app connectors, hardware root of trust (using Hardware Security Modules or Trusted Platform Modules) and more.
  • IoT Things Graph (preview) is an easy way for developers to build IoT applications; it hides low-level details and enables packaging as reusable components.
  • Also, Bluetooth Low Energy is now supported in Amazon FreeRTOS (beta).

So overall there were quite a few announcements in the IoT space. If you are doing IoT, there should be some interesting features among them that make life a lot easier.

AWS Transit gateway

A new feature that allows users to connect their VPCs and on-premises networks to a single gateway. The Transit Gateway acts as a centralised hub to which VPCs and on-premises networks connect as spokes. It includes support for dynamic and static routing, and since the Transit Gateway allows forwarding of DNS queries, it is possible to resolve IPs in other VPCs connected to the same gateway. In addition, there is monitoring, security and management using IAM and CloudWatch, and there is support for Equal Cost Multipath (ECMP) routing over VPN connections to on-premises networks.
Overall, Transit Gateway is a huge step forward in networking, as it makes creating complex topologies much easier. Especially enterprise customers, who might have multiple accounts used by multiple departments, should now be able to create more uniform access to on-premises networks instead of connecting the different VPCs individually via VPN/Direct Connect.
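As a rough boto3 sketch of the hub-and-spoke setup (the VPC and subnet IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create the hub itself.
tgw = ec2.create_transit_gateway(
    Description="shared hub for all our VPCs",
    Options={"DefaultRouteTableAssociation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC as a spoke; repeat for every VPC that should join the hub.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```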

AWS Global Accelerator

If Transit Gateway is useful for inter-VPC communications, then AWS Global Accelerator is at least equally useful but targeted at the Internet. With Global Accelerator, applications can make use of the AWS global network backbone. Global Accelerator removes the need to manage different IP addresses for different regions: it reserves two IPs and anycasts them globally. Traffic enters the AWS network at the nearest point of presence and from there travels over the AWS network until it reaches its endpoint. Endpoints can be configured in different AZs or regions and are continuously health-checked. Global Accelerator greatly simplifies multi-region setups and provides a smoother end-user experience.
This is definitely on my "gotta try it out" list. One more step towards making multi-region setups more common.
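A minimal boto3 sketch of reserving those anycast IPs (the accelerator name is made up; the Global Accelerator control-plane API itself is served out of us-west-2):

```python
import boto3

# The Global Accelerator API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Creating the accelerator reserves the two static anycast IP addresses;
# listeners and endpoint groups are then attached to it.
acc = ga.create_accelerator(
    Name="my-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)
print(acc["Accelerator"]["IpSets"])  # the static IPs handed to you
```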

Nitros and more

With the new AWS hypervisor system called Nitro, there is now a new instance type, C5n, featuring 100 Gbps networking speed. Not much more needs to be said: more bandwidth is always good, and for customers who are maxing out 10 Gbps or 25 Gbps this is a welcome relief.
Then we have a very interesting announcement: EC2 A1 instances. The interesting part is the 64-bit ARM processor built on custom-designed silicon called Graviton. That's it, no x86. There are several Linux distributions that can run on these instances, and it will be interesting to see what kind of adoption these machines receive. Moving out of the AWS context, it's also interesting to see ARM processors starting to take on areas normally dominated by x86 chips: Apple's A10 chip and now Graviton from Amazon. Should Intel feel threatened? Time will tell.
Ever wondered what kind of server fleet runs customers' Lambda functions? Or Fargate containers? Wonder no more, since AWS has released Firecracker, a microVM for running containers. Will this technology find its way into other open-source projects?

Wrap up

Today's announcements have touched some very fundamental building blocks. The fundamentals have changed so much that developing multi-region applications or multi-account networking looks a lot different than it did 24 hours ago.
More announcements and news will be released throughout the week. I'll post again tomorrow; let's see what surprises AWS has prepared for us!

Aki Ristkari


AWS Re:Invent 2018, day 0

Before diving into the technical aspects and the new announcements, I'll take a moment to write a bit about the time before the actual conference. If you have never participated before, there are a couple of 'gotchas'.

Travel early…

Travelling from Europe is tiring, and it's better to arrive early to give yourself time to recuperate. Also, when travelling from Europe, remember that if you have a connecting flight inside the USA, you will have to do the customs/CBP procedures when you first land. Combined with the fact that your luggage must be collected from baggage claim and re-checked onto the domestic US flight, this means you should reserve enough time for your connection; otherwise you will experience added stress and potentially miss your connection.

When in Las Vegas… 

Remember that the whole city is designed specifically to separate people from the contents of their wallets. Everything costs, and more often than not the price is not cheap. Las Vegas is in the middle of the desert and the air is dry, which is something to take into account if you have sensitive skin. For me, the effect of the dry air is most visible in my beard: in Finland it is usually much more curly due to the more humid air, while here it straightens out considerably. I bet you wanted to know that 🙂
Las Vegas Boulevard, aka 'The Strip', isn't that long on the map, but it is long enough that moving between the different venues takes time. If at all possible, plan your schedule so that you minimise moving between venues. AWS has booked shuttle buses, there's a monorail and you can walk, but all the options take time, and most of the time there will be a sea of other attendees moving along with you.
Also, contact other companies and people. There is a huge number of smaller gatherings and parties organised by different companies, so the opportunity to network and get to know people is huge. Attending your local AWS meetups will help you connect with others.
In the end, conferences are usually best experienced first hand. The technical information can be learned from streams and YouTube videos, but being visible and networking is not possible if you don't attend. Furthermore, attending with only one person is overwhelming: absorbing everything that is available is a huge task, and combining that with networking and possibly running a booth is even more so. Consider sending more people, preferably 2-3, and if you have clients with you or a stand in the expo, you need even more. Naturally, for a consulting business this can be a pretty big investment: there are the costs of the trip itself (tickets, flights, hotels, per diem, etc.), but in addition the attendees are not doing billable work. So attending Re:Invent can also be seen as a commitment; you are committing to your partnership with AWS.

Actual announcements and news!

On Sunday, the 25th of November, the actual Re:Invent hadn't started yet; however, there were some programme items already, more specifically Midnight Madness and the Tatonka challenge. Midnight Madness is a launch party, or pre-party, and the Tatonka challenge is an event where attendees try to eat huge quantities of chicken wings. I had the advantage of living in Tampere, the wing capital of Finland. Long story short: I didn't win Tatonka, but in addition to Tatonka and Midnight Madness there was the first official launch: AWS announced AWS RoboMaker.
RoboMaker is intended to help developers create robotic applications. AWS has extended the open-source framework ROS with extensions that give it connectivity to the cloud. RoboMaker aims to be a complete development environment and includes an integrated development environment, simulation capabilities and fleet management.
Robotics is not an area that comes up in my daily work. However, if you are working in such a field, this new offering might be useful for you. I also hope that offerings like RoboMaker will help different ecosystems grow: making robotics and robot development accessible to a bigger audience will help innovation and might produce completely new products and offerings.
In addition to RoboMaker, there were also some interesting announcements and new features published during the last few days. However, these might go unnoticed on the grand scale of Re:Invent. Here are some of the new features, sampled by me (my listing is not comprehensive):

  • EFS infrequent access storage class (coming soon). EFS will be getting an infrequent access storage class, much like S3 has. Naturally, this helps with cost control and should be interesting to anyone using EFS.
  • Amazon Rekognition: improved facial analysis that detects faces with greater accuracy and confidence. Should be interesting if your use case includes Amazon Rekognition.
  • AWS DataSync: a new service to automate transferring data between on-premises storage and S3 or EFS. It is mostly aimed at hybrid solutions and cloud migrations. Definitely something to check out if you are working in the hybrid/migration space.
  • S3 Batch Operations (preview): simplifies the management of huge numbers of objects. Bulk operations have usually meant custom code developed by AWS customers themselves; Batch Operations aims to reduce the complexity they usually require, whether moving objects, replacing tags or managing access controls. The use cases are almost limitless, ranging from compliance to backups to data migrations.

That’s it for Sunday in Vegas. Let’s see what Monday brings!

Aki Ristkari
