Deploying public cloud platforms is effortless and fast. Even beginners can achieve visible results quickly — a virtual machine only takes a few dozen seconds to set up. Just give your credit card details and get started on your project, and what you don’t know yet, you can easily learn as you go along, right?
However, ease of use conceals risks. Platform providers may invest heavily in areas such as security, but novices can easily wind up building an insecure environment. Expertise is also needed when choosing a solution for each purpose: reserved but unused capacity can lead to unnecessary costs.
Granting full freedom of action to projects can backfire later, by making them difficult to manage or by raising costs. Environments paid for with plastic and built hastily around project needs may include needless overlapping solutions.
Build safely on a solid foundation
As in construction projects, a safe and stable foundation guarantees a firm basis on which you can build, and provides opportunities for extensions. Repairing a foundation retrospectively can be laborious and incur unnecessary costs.
However, nothing is set in concrete at the beginning of a cloud project — the configuration can be updated as use expands. On the other hand, you should design the basic components, such as account structures, network connections and authentication, at the very start, in order to move forward on a firm basis. Role-based access and user management provide clarity and improve security. Well-designed network structures, both within and outside the cloud, enhance security and boost intuitiveness. Management of confidentiality should also be clearly planned and communicated.
Where necessary, certain cloud services can be excluded from normal use — few users need the computing power of supercomputers, for example. However, such services will certainly accrue costs. In many cases, there are also good grounds for restricting the geographical location. It’s often best to start by opting for the users’ local region, particularly if it offers a sufficiently broad service portfolio. You can select a certain single region or, for example, the EU/EEA.
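To make the geographical restriction concrete, here is a minimal sketch of how such a rule could be expressed on AWS as an Organizations Service Control Policy. This is my own illustration, not an official guideline: the allowed regions and the list of exempted global services are assumptions you would adapt to your own needs.

```python
import json

# Illustrative choice of EU regions (Ireland, Stockholm)
ALLOWED_REGIONS = ["eu-west-1", "eu-north-1"]

def region_restriction_policy(allowed_regions):
    """Build an SCP document that denies actions in all other regions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideAllowedRegions",
                "Effect": "Deny",
                # Exempt global services that are not bound to a region
                "NotAction": ["iam:*", "organizations:*", "sts:*"],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

policy_json = json.dumps(region_restriction_policy(ALLOWED_REGIONS), indent=2)
```

Attached at the root of an organisation, a policy like this stops projects from accidentally spinning up resources outside the chosen regions, while still allowing region-independent services to function.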
As use expands, new needs tend to arise, such as various monitoring solutions, log management, and improving and assuring fault tolerance.
An expert partner will help you make the right choices and define the basic principles. There is certainly no need to spend weeks poring over plans, and you can also make sure that the cloud foundation complies with the best practices recommended by platform providers.
Cloud Foundation or Landing Zone?
It’s easy to get lost in the terminology jungle. Different platforms may use slightly different terms for a cloud foundation, but they nevertheless mean the same thing. In the most straightforward cases, the provider offers a ready-made framework on which a cloud foundation can be built from code.
This means that design of the foundation does not in any way hinder the project from starting, but ensures smooth work and efficient resource use in the future.
For larger projects, it also makes sense to consider setting up your own cloud-focused competence centre to provide projects with support and expertise to ensure efficient use of the cloud.
Controlled expansion, efficient operation
A sensible and controlled basis also provides opportunities to expand into new areas; completely new solutions can be built on the cloud foundation, or existing ones transferred there. Centralised cost management enables cost optimisation and ensures overall visibility.
Centralised DevOps practices that streamline product development, and a highly automated cloud architecture, minimise the need for manual work, ensuring efficient and modern cloud-based operations. Automatic recovery from faults is no longer the stuff of science fiction.
Towards more sustainable, genuine benefits
Thanks to public cloud platforms, ICT architectures can be built in hours or days, rather than weeks or months. However, the pace must allow time to get the basics right and thereby guarantee efficient and secure use of the cloud in the future. Even in the cloud, security, scalability and ease of operation do not come automatically, but a well-designed foundation can help to ensure that you genuinely gain from a cloud-based environment.
Do you want to get the most out of cloud services?
Take a step towards your goal by signing up for our free GTalks webinar “Good basics of Cloud” on 11.3.2021 (11:00 to 12:30 EET).
Cloudy skies in black and white
Everyone has heard of cloud services, some even know what they are, and almost everyone has a fairly strong opinion about them. Those opinions are sharply divided.
Some experts are very sceptical about the cloud. It is considered vague and, above all, unreliable in many respects. Cloud services are ‘out there somewhere’, are operated and processed by ‘whoever’, and are vulnerable to network connections being down. Many organisations question whether key services can be moved to the cloud, while others still ban the use of cloud services altogether. This approach can be regarded as an unwritten “no cloud” strategy.
The other extreme is made up of cloud groupies. The cloud is viewed as an attractive, all-purpose technology to which all services should be moved immediately, and then only used from there. This ‘cloud technology groupie’ line is equivalent to an undocumented and unaccepted ‘cloud only’ strategy.
Because these two very opposing views often clash within the same organisation, every organisation should outline its approach to cloud services: a documented, carefully considered and widely accepted cloud vision and cloud strategy are needed.
A cloud or just hot air?
When discussing cloud services, the first problem tends to be that different people have different views of what cloud services are. For some, ‘cloud’ means ‘anything to do with IT from outside our data centre.’ Many are beginning to take a strong stand on whether AWS virtual platforms or development tools are better than MS Azure. Some recall that systems are perhaps being acquired on an SaaS basis.
When developing a cloud service policy, it is a good idea to define what actually counts as a cloud service: well-established and highly productised services with large customer bases, which adapt to customer needs without separate commissioning projects. Not every virtual platform run by a supplier is a cloud service.
Discussions about cloud services are often limited to cloud platforms. However, the Finnish public administration’s cloud guidelines include the sound principle of having a cloud policy that covers all cloud operating models — IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) and BPaaS (Business Process as a Service). Good cloud policies cover all of these.
From black and white to shades of grey — the smart approach
Cloud services are here to stay. Ruling them out recalls the negative attitude of gaslight experts to electricity — it won’t work and it’s probably dangerous. For example, off-the-shelf software has moved, or is actively moving, almost exclusively to the cloud. In practice, new software of this type is no longer being developed for setup by the customer. Over the next few years, it may even become difficult to find locally installed software that meets operational needs. So the issue is no longer whether or not to use cloud services, but how to use them securely and benefit from them.
It is time to let go of the ‘all or nothing’ attitude to cloud services. Cloud service users now adopt the so-called Cloud Smart approach, assessing the suitability of cloud services on a case-by-case basis. Most services can be moved to the cloud, but some cannot due to regulations or the need for continuity.
A cloud strategy and detailed development path in support of change
Define your organisation’s very own strategy for benefiting from cloud services. Draw up an overview of all the ways in which your organisation can leverage cloud services. On what grounds and based on what policies will cloud services be bought or developed, and with what aims? Explore all cloud service models (IaaS, PaaS, SaaS and BPaaS) and the entire cloud solution life cycle.
A cloud vision and strategy will provide an excellent, jointly agreed main model for leveraging cloud services in your organisation. However, you will need more than a strategy to realise the benefits of cloud services. Draw up a systematic, comprehensive and measurable roadmap for developing cloud service capabilities (expertise, technology, management models, instructions and procurement procedures).
Be bold, document the process in detail, accept and commit.
Do you want to get the most out of cloud services?
Take a step towards your goal by signing up for our free GTalks webinar “Good basics of Cloud” on 11.3.2021 (11:00 to 12:30 EET).
The latest X-Road Community Event was a huge success. With 150+ participants from 22 countries, it is evident that enabling digital societies, both as a topic of interest and as tangible action, is high on the agenda of nations worldwide.
The event was organised by the Nordic Institute for Interoperability Solutions (NIIS), which develops and manages an open source data exchange solution called X-Road. X-Road is the basis for data exchange in the public administration in Estonia and Finland, both of which are founding members of the organisation. Lately, Iceland and the Faroe Islands have also joined as partners, and various countries and regions in Europe, Africa, the Americas and Asia have run trials and adapted X-Road for their use. See the X-Road world map for details:
Currently, Gofore is the sole developer of the X-Road core for NIIS through a public procurement.
X-Road version 6 is deployed in Finland and Estonia, and Iceland will follow suit shortly. The Faroe Islands and some other countries are preparing to migrate their platform from version 5 to 6.
At the event, plans for the next version of the software, X-Road 7 Unicorn, were introduced and presented in various workshops by experts from Gofore, the Finnish Population Register Centre (VRK) and the Estonian State Information System Authority (RIA). NIIS CTO Petteri Kivimäki stated: “X-Road is not developed for us [NIIS] but for you [nations and organisations]”, so it is evident that close collaboration between the development of the core and existing and planned local installations is highly valued. The MIT-licensed open source software enables maximum utilisation, and all users are welcome to contribute and create pull requests for additional required features.
Planning to utilise X-Road?
If your country or organisation has various data sources and siloed services, taking X-Road into use will enable a fluent, fully secured and easily manageable solution for exchanging data between sources. Such fluent data exchange enables endless possibilities for derivative machine-to-machine applications and easy cross-border data exchange between countries. Of course, the ultimate target is smooth human-centric services for citizens, which often require an additional trusted digital identity management system to be built alongside the information systems connected by X-Road.
Gofore has experience and deep expertise in all layers of X-Road and in digital identity design, development and deployment, and we are looking to support their utilisation at an international scale.
If you want to hear more, please contact the author or download the leaflet below – it will provide more detail on why, what and how X-Road would help to achieve a digital society in your context.
Interested in reliable, secure and easy-to-use integrations for digital services? X-Road provides this and more. Download our X-Road leaflet to learn more about how it could be utilised in your business: X-Road leaflet
Tampere, Finland, May 02, 2019 — Gofore, a Finnish digitalisation specialist with international growth plans, today announced that it has joined the Google Cloud Partner Program as a services partner giving Google Cloud customers the ability to benefit from Gofore’s cloud capabilities.
Google Cloud Platform is the third major cloud provider Gofore has partnered with (AWS, Azure and now GCP). We’re proud to be GCP’s eighth partner company in Finland. This bolsters Gofore’s standing as a leading cloud platform consultancy.
As a Google Cloud partner, Gofore offers customers consulting services regardless of their current technology, whether they are moving to the cloud or planning and building their own cloud infrastructure. In addition, we provide reliable service management on different platforms.
Key features of Gofore’s cloud offering include:
- Agile Application Development with modern tools and technologies
- Cloud consulting in all aspects of the cloud
- User training, workshops
The Google Cloud Platform brings many benefits to our customers including:
- A truly global private network
- A great developer experience (when asked, many developers say they would prefer GCP for new projects)
- Google Cloud is the clear leader in the AI, ML and container market space
- Saving costs by utilising the performance and scalability of the cloud
- A data centre in Finland for lower latencies for our clients in Finland
Whether you need help with Big Data and creating Data Pipelines, software development, lift and shift or creating cloud infrastructure, Gofore has your back. We have over 7 years of Cloud experience delivering complex customer projects and creating value for our customers.
“Gofore is excited to join the Google Cloud Platform Partner Program. This partnership allows us to expand our proven expertise in the cloud, bring the benefits of Google Cloud Platform and provide additional value to our customers,” said Timur Kärki, CEO, Gofore.
Quite often I hear the claim that “on-premise is more secure than cloud”.
Having worked in both the on-premise and cloud worlds for several years, I’m in a good position to dissect such claims into smaller subsets and make some comparisons.
Regarding cloud environments, I’ll stick with Amazon Web Services (AWS) which I am the most familiar with.
Let’s start with physical security.
A properly configured server room must have the following topics covered:
- Deny unauthorised access
- Ways to prevent and detect tampering
- Although not directly related to intrusion or unauthorised use, a fire alarm and fire suppression system must be present
- All rack cases must be locked so that, for example, thumb drives cannot be inserted
- Backups must reside in a remote location and must comply with the same security policy as the on-premise source
In cloud environments, the above-mentioned best practices are the responsibility of the service provider – if not, please change your provider – quickly!
With such best practices in place, a cloud customer doesn’t need to be concerned with the hardware aspects when designing a cloud-based system.
Regarding software security, the following topics must be covered:
- Keep software up to date
- Scan for vulnerabilities
- Scan for misconfigurations
- Security is layered
<shameless plug>If you missed my previous post, some of these topics were covered in greater detail here: https://gofore.com/computer-security-principles/ </shameless plug>
Another often heard claim is “Data is so sensitive that it cannot reside in the cloud”
Right, so why is that computer connected to the Internet?
Everything is crackable and the firewall in front of the computer is just a teaser in the game. If the data is that sensitive, then it must be in an encrypted format. You’ve got this covered, right? I hope so!
For these kinds of best practices, AWS offers great tools:
- Encrypted S3 storage (object storage)
- Systems Manager Parameter Store to keep secrets, such as database credentials, in encrypted form
- Key Management Service to automate key handling, including key rotation and audit trail
(to name just a few examples)
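As a minimal sketch of how these pieces fit together (assuming boto3; the bucket, object key and KMS alias below are placeholders of my own, not values from the post):

```python
def encrypted_put_kwargs(bucket, key, body, kms_key_id):
    """Arguments for an S3 upload using server-side encryption with a KMS key."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # S3 encrypts the object at rest
        "SSEKMSKeyId": kms_key_id,          # key handled (and rotated) by KMS
    }

# Usage (requires AWS credentials, so not run here):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**encrypted_put_kwargs("my-bucket", "report.csv",
#                                        b"sensitive data", "alias/my-app-key"))
#
# Secrets such as database credentials can likewise be stored encrypted:
#   boto3.client("ssm").put_parameter(Name="/app/db-password", Value="s3cret",
#                                     Type="SecureString", KeyId="alias/my-app-key")
```

With the encryption delegated to KMS like this, key rotation and the audit trail come from the service rather than from hand-rolled code.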
If a virtual machine is being run, one should be aware that Spectre and similar hardware vulnerabilities pose some danger, at least in the cloud, where resources are shared.
An evil-minded attacker’s virtual machine instance would need to be located on the same host machine on which the victim’s instance is running.
These kinds of vulnerabilities are patched very swiftly as soon as a fix is available, especially since they pose a danger to the providers’ core business. Therefore these attacks are short-lived, unless a new zero-day exploit is found. And even then, the zero-day exploit must be applicable and:
- Moderately quick to exploit to be of any benefit
- Have a fairly high success rate, and give enough permissions to control the needed resources
An improvement would be to use cloud-native components to handle load balancing, container orchestration, message brokering and so on.
Why? Because those are constantly audited by the cloud provider, therefore resulting in a smaller attack surface compared to handling the whole operating system and its software components (and their updates).
Copying an insecure application into the cloud doesn’t make it magically safer.
Regarding security standards, AWS complies with the following letter and number bingos:
- SOC 1/ISAE 3402, SOC 2, SOC 3
- FISMA, DIACAP, and FedRAMP
- PCI DSS Level 1
- ISO 9001, ISO 13485, ISO 27001, ISO 27017, ISO 27018
These standards fulfil the requirements of Nasdaq, the US Department of Defense, and Philips Healthcare, to mention just a few high-profile customers. These organisations take security seriously and have huge budgets for their security teams.
AWS Aurora is a MySQL- and PostgreSQL-compatible relational database service (RDS) that offers automatic scaling and updates, which diminishes the burden of updates drastically. Major version updates can be done this way too, though it’s against best practices to upgrade without testing. You have been warned!
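As a small illustration of handing update work over to the platform, here is a sketch (assuming boto3; the instance identifier is a placeholder of my own) of opting an RDS instance in to automatic minor version upgrades:

```python
def auto_upgrade_params(instance_id):
    """Arguments for modify_db_instance enabling automatic minor version upgrades."""
    return {
        "DBInstanceIdentifier": instance_id,
        "AutoMinorVersionUpgrade": True,  # platform applies minor patches for you
        "ApplyImmediately": False,        # wait for the maintenance window
    }

# Usage (requires AWS credentials, so not run here):
#   import boto3
#   rds = boto3.client("rds")
#   rds.modify_db_instance(**auto_upgrade_params("my-aurora-instance"))
```

Deferring the change to the maintenance window keeps the patching predictable; major version upgrades should still go through a test environment first.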
The biggest cloud providers, namely Amazon, Google and Microsoft, have some of the most talented people in the field working on their products to keep their customers’ data secure. Compare this to on-premise scenarios where, in the worst cases, it’s a one-man show. If (s)he is not really interested in security, then it’s a security nightmare waiting to be unleashed.
Nothing protects against faulty configuration choices in the cloud either, though some things are harder to make globally reachable by default.
In conclusion, the cloud is not the new kid on the block anymore.
Learn your environment and implement best practices.
A correctly configured cloud is secure and might save the administrator/DevOps/whatever from sleepless nights.
You can learn more about gaining cloud certifications in our blog series starting here: https://gofore.com/en/getting-certified-on-all-cloud-platforms-part-1-introduction/
Where GCP and AWS are like two peas in a pod, Azure is something different. Azure has taken many of the things found in more traditional Microsoft infrastructure and brought them to the cloud. Much thought has also been given to the idea that the cloud should work as an extension of that on-prem Microsoft infra. Recently, Azure has taken steps towards becoming more like its competitors; it has dropped pretty much every container orchestration service other than Kubernetes and is slowly adopting Availability Zones (for example).
Use cases for Azure are mostly projects that go heavy on Visual Studio or Azure AD, or that pair with existing on-prem MS infra. Out of the cloud offerings, Azure is my least favourite, maybe because I’m not too into the Microsoft ecosystem in the first place.
As with Google Cloud, Azure certifications are a living, moving thing: they evolve over time and are less like snapshots of an era. In addition, Microsoft (rather painfully) retires exams and rearranges its course palette pretty often. I did all my certs in Jan–Feb 2018 by taking 70-535, 70-533 and 70-473, acquiring MCSE, MCSA and MSP certificates. While writing this blog post I noticed that a) MCSE, MCSA and MSP are no longer a thing, as the certifications are now role based (as with AWS and GCP), and b) all my exams (except for 70-473) have been retired.
All the new certifications can be found here: https://www.microsoft.com/en-us/learning/browse-new-certification.aspx. You might find some resemblance to AWS certificates, as the naming convention is conveniently much the same. One thing still differentiates Azure from AWS and GCP: you don’t always do a single exam to earn a certificate; sometimes you have to do more. Also, the certs have prerequisites, which is something AWS gave up in 2018. You can find all the exams here: https://www.microsoft.com/en-us/learning/exam-list.aspx
Here is a chart of certs and correlating exams:
| Certification | Exam(s) | Prerequisites |
| --- | --- | --- |
| Azure Administrator Associate | AZ-100; AZ-102 (transition exam for people who passed Exam 70-533) | – |
| Azure Developer Associate | AZ-202 (transition exam for people who passed Exam 70-532) | – |
| Azure DevOps Engineer Expert | AZ-400 (in beta) | Azure Administrator Associate or Azure Developer Associate |
| Azure Solutions Architect Expert | AZ-300; AZ-302 (transition exam for people who passed Exam 70-535) | Azure Administrator Associate or Azure Developer Associate |
The transition exams will retire in June 2019, so if you want to upgrade your certs, do so fast. I thought I’d be done with certs for a while, but it’s possible that I’ll spend a day this spring studying for AZ-302 and trying the transition exam. I’m not especially thrilled by this change of plans, but I’d hate to waste all the time I spent on Azure exams.
Even though the exams have changed, I think the training methods for Azure stay the same. When I searched for viable online courses, pretty much the only good source was Scott Duffy on Udemy, and I know a few other people who have passed the Azure certs by following his advice. I’m quite sure I saw Scott’s courses on acloud.guru at some point too, but I can’t seem to find them there anymore.
You should also use Microsoft’s special offers when they are applicable. Usually, you get an exam, an exam retake and a practice exam for the price of a single exam. As the exams are new, though, first check https://www.mindhub.com for the availability of the practice exam; sometimes it takes a while before practice exams can be bought. The practice exams give you good insight into the exam area and, instead of just “right” and “wrong”, also guidance on where you can learn more about the subject. Knowing the format of the exam and having a free retry also give some room for experimenting (or brute force), taking some pressure off the exams.
Taking the exam
Where Azure shines is that you can take the exam at your workplace or at home, given that you have a webcam and a mic on your machine. The exam situation is rather silly, as they inspect a lot of stuff in the room and on your body, but you can schedule an exam for the same day and get it done. The system failed me only once, so learn from my mistake: if you do the system check early on (which I suggest if you have never done Azure exams this way before), repeat it on the day of the exam. For me, the exam software failed somewhat miserably, as there was a new update that wouldn’t start, robbing me of a single exam try. As it was a software failure, I managed to get a new try for free, but I had to wait a week for the issue to be diagnosed, during which I couldn’t retake the exam.
Phew. This concludes the blog series on cloud certifications. The rather anticlimactic ending was not of my design, as my exam-specific knowledge became obsolete in one giant whoop. If you are interested in the cloud and already have some experience with it, check whether one of our open positions would suit you at https://gofore.com/en/careers/
Other posts in this series:
In this blog post, it becomes obvious that out of all the certification paths available I’ve chosen the one more related to the Ops side of the DevOps spectrum. I’ve been there when infrastructure couldn’t be considered code, when a server needed fitting into a rack*, and I’ve never written “real code” in my life apart from simple Perl/Python scripting. With the cloud, I’ve continued to build upon that foundation. This is reflected in my certifications: I’ve done the Ops path but left the Dev side totally untapped (a trend that continues in the next [Azure] blog post). After doing my share of cloud certifications, I’ve dipped my toes into the realm of modern web programming. Does this mean that I should do the Dev certs next? Nope. The Pro level certs take a lot of time and I don’t see the investment of my free time paying back any time soon.
* Yes, cloud computing is just a cluster of computers that also needs fitting into a rack and petting, but it’s not really a mainstream job for a system administrator anymore, is it?
Ok, enough of my motivational circumstances. What about GCP then? I think out of all the clouds it’s the most user-friendly: the web console is just way better than AWS’s, the shell you get in the web console is nice and I would totally like to use App Engine on a project (with live debugging and other shiny thingies). Stackdriver as a whole still feels a little weird to me, but it has tons of functionality. And with the Google Kubernetes Engine being arguably the best in the business, I would choose GCP whenever I had to run a container production load.
I’ve done the Associate Cloud Engineer and Professional Cloud Architect certs. There are also Professional Data Engineer and Professional Cloud Developer certifications, and even one on G Suite, but I’ve got no experience of those. Unlike AWS and Azure, Google actually does its own web training, and it is far better than any of the third-party offerings. How great is that! They also send you some (mostly useless) swag when you complete any certificate, which is a nice gesture.
You can find all the exams here https://cloud.google.com/certification/
Picture1: The least worst swag I got from Google
I did the Associate Cloud Engineer beta exam as a sort of practice exam for the professional one (because why not?). It took a few months for the exam results to arrive (for beta exams it tends to be that way). I’d actually totally forgotten about the whole exam by then, and it turned out that I was one of the first one hundred to get the certificate. The exam was the least theoretical cert exam I’ve ever completed. If I had to give any hints, it would be to get familiar with the web console and the SDK tools. Use them, preferably in a practice project. Yes, you have to know some commands. Yes, you have to know where stuff is in the web console. This is an exam for a Cloud Engineer; it measures how well you can do stuff.
Out of all the cert exams I’ve done, the Professional Cloud Architect was my favourite: hard, but interesting. It threw really, really odd curveballs at me that couldn’t be prepared for, and learning trivia by heart had no value in this one. It measured whether you knew your cloud, and that means everything that comes with it. It was also the exam I spent the most time studying for. I think it took some three months at one to two hours a day for me to get comfortable with the whole exam area.
For both the associate and the pro I suggest the Google-made Architecting with Google Cloud Platform specialization. It might be a little too deep for the associate, but better too deep than too shallow.
Also for the pro, I would supplement the studying with the following:
- Linux academy course part 3
- Parts 1 & 2 are far worse than the ones on Coursera, but part 3 has some hints for the exam and also practice exams
- Read Google’s docs on the relevant services – spend extra effort on these two:
- Google’s own documentation on how to build a cloud architecture on different scenarios
- A cheat sheet of sorts on all Google Cloud services
- A quick intro for people coming with an AWS background
- Google’s own documentation on how GCP compares with AWS
- A practice exam provided by Google
- Study the case studies at the end of certification description, they might appear in the exam
Also, as Google’s certs are a moving target (they get updated constantly), keep up with the news on their blog. I also strongly suggest that you watch the Google Next ’18 talks on relevant services.
Doing the exam
At least in Finland, you can only take the exam as a proctored exam, where someone observes you doing it, and only in Espoo or Helsinki. You can reserve your exam time at https://cloud.google.com/certification/.
Unlike with AWS and Azure, at the end of the exam you are only told whether you passed or not, with no indication of how well you fared. No points, no percentages, nothing. I think this can be extremely frustrating for people who don’t pass, as they have no idea whether they were even close. Just hope you’ll see the “you passed” message and get to order some swag for yourself.
I’m also head of GDG Cloud Tampere and we’ll be hosting many nice events this year. Join the fun at https://www.meetup.com/GDG-Cloud-Tampere/
Other posts in this series:
My journey to cloud environments started with AWS. First I staggered through the internet trying to find a good guide for understanding cloud computing, different platforms and terms used. I spent considerable time on this (retrospectively) useless wandering until I started studying for the Solutions Architect Associate certification and got my first bite of well-structured course material on my first ever cloud: AWS. Even if some people consider certifications silly and a waste of time, the certification courses themselves are a brilliant way of grasping how a cloud platform works.
My personal opinion on AWS is that it may not be the most user-friendly platform, but it’s still the most versatile one out there. If there is something that you can do in cloud, you can probably do it in AWS. This being the case, I would choose AWS as a running platform unless there is some reason not to.
Picture1: AWS’s chart of all the certificates
An up-to-date list of AWS’s certifications can be found here. No new associate or pro level certs have been added during the time I’ve been around the scene, but the existing exams have been slowly updated to match the AWS of 2019. The names of the certifications have stayed the same, though. Unlike Azure and GCP, where exams are kept up to date, AWS’s exams represent a snapshot of a given time. From the exam taker’s perspective this is a good thing, but from a practical implementation perspective it’s a bad one. The exam taker expects the study material to stay constant for years, and as such there is lots of exam material online to aid you. In practice, you end up studying old material, and usually the newest re:Invent stuff is not in the exams. The worst (or best?) example of this is the reserved instance classes (heavy/light utilization), which are obsolete and no longer covered in any official documentation, but I’ve still found questions on them in both the Solutions Architect Associate & Professional exams. Both of these exams have now been updated, and I doubt there are any reserved instance class questions left, but in a few years there will be something similar.
Certifications used to be valid for two years, with a one-year grace period during which you could do a simple re-certification even though your cert had expired. Because that system was rather confusing, the new exams are now simply valid for three years, during which they can be extended by doing the re-certification exam. There also used to be a requirement to pass a specific associate exam before you could even attempt a Pro cert exam, but this restriction was removed in late 2018. This doesn’t mean you should go straight for Pro certifications unless you have worked with the platform for years using a plethora of services.
Picture 2: Associate exams and shared material
Associate certs share around 70% of the same “this is AWS” base material with each other, concerning networking, IAM, storage etc. If you do the Solutions Architect Associate certification first (which I recommend), you can do the Developer and SysOps courses with a few days of prepping. Should you choose this method? Well, it looks better on your CV, but it really brings little extra to the table. The Developer certification covers developer-centred material such as DynamoDB more thoroughly, and SysOps goes into more detail on OpsWorks and Elastic Beanstalk. You really only have to study the difference between your first associate exam and the new one you’ll be doing. If you have a sponsor for your exam fees and you want to boost your CV, go for it.
The pro level exams cover some common ground with each other, both being AWS exams, but they share fewer details than the Associate exams do. For me, it took a few months to study for each exam individually. I initially started reading for the DevOps Pro right after I got my associates done, but it was too steep a hill for me to climb, and I ran out of motivation around halfway through the course materials. One year later, with actual AWS projects under my belt, I read through the materials, which now felt easy, and passed the exam quite comfortably. I tried studying for the Architect Pro after that but hit that familiar wall once again; fast forward through another year of AWS projects, and I told my colleague that “this course material brings very little new to me” and passed the exam.
For professional certifications I have only one piece of advice:
Do. Actual. Projects. On. Cloud.
After that they are easy.
I think that holders of Pro level certifications are somewhat respected, should such a title appear in your CV, but once again I don’t think there is much difference between having one or two.
Unlike the general certifications, the specialties share very little with each other, each concentrating on one topic and going deep into it. I have to admit that I haven’t done any of the specialty certs, only skimmed through their content. I intended to do the Advanced Networking certification, but gave up around halfway through the course material, when it was going through the finest details of BGP. As AWS certificates go, they are quite new, with new ones coming out every now and then, so I don’t know how much reputation you gain by passing them.
I totally and wholeheartedly suggest that you use www.acloud.guru for studying. The guys and gals there are doing a fabulous job on online courses. acloud.guru has a practice exam (usually) at the end of each course, which is quite sufficient. You can also buy a practice exam from AWS with some 20 questions, but it’s usually badly written, and even if you know your stuff (and pass the actual exam) you might end up with just 60% of the questions correct. If you are feeling cheap and are only going to do one certificate, you can grab an acloud.guru course dirt cheap from www.udemy.com; they have a sale going on every day.
In addition to acloud.guru, I complemented the materials with those on www.linuxacademy.com for the Solutions Architect Professional course, as at that time there was no end-of-course practice exam on acloud.guru and I felt that some of the services were not explained in enough detail.
I read every whitepaper suggested in the courses, and I also read the FAQs and documentation for the most important services. As noted in the first blog post of this series, do the following:
- Read the certification requirements
- Take part in a web course that goes through the relevant material
- Read the documentation for the most important services
- Do some practice exams
- Ace the exam
What I did forget to mention, though, is using the services. For every service on the exam, you should use the actual service. If you have a pet project to use them in, great. If you don’t, just click through the dialogues so that you understand every option and how it influences the end result. For Pro certs you also have to do some real work with cloud computing; otherwise the wall is too high for you to climb, sorry.
Doing the exam
You can reserve your exam time on www.aws.training. If English is not your first language, remember to indicate so on the portal BEFORE reserving the exam; this gives you some extra time. You can do this on the AWS Training and Certification portal by clicking “Upcoming events”→”Request Exam Accommodations”→ “Request Accommodation” → “ESL +30 MINUTES”.
Your options are basically a proctored exam, where a person watches how you are faring, or a kiosk. In Finland, the proctored option is located in Helsinki, and kiosks can be found in Helsinki and Tampere. There has been a lot of conversation about the kiosk PCs rebooting in the middle of exams, possibly multiple times, and about how you are monitored in case you cover your mouth, which creates extra stress. Personally, I liked the kiosk experience, as I could do the exam on the other side of the road from our office in Tampere. Yes, the passport recognition mechanism was broken, as the receptionist told me, and the person on the other end of the line wouldn’t or couldn’t understand that, requiring me to restart the exam registration a few times, but the exam itself went quite smoothly.
Right after the exam, you are told how you fared, and within a few minutes you also get the results by email, with percentage grading for the different areas of the exam.
This blog post is the first of a new series that will be published in the following weeks. The aim is to cover getting certified on all the major cloud platforms currently (1/2019) available in Europe: Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. I’ve done my Pro level certs on all these platforms quite recently (within a year), so I have some knowledge of the subject. I’ve written similar texts on our internal wiki, but as there are no secrets in there, I decided to rewrite the material in a more reader-friendly format (and less like a stream of consciousness).
Like the good authors of certification guides, I’m not claiming that you will get certified by doing what I advise, but I can safely say that following my advice raises your probability of success.
This first blog post is labelled ‘introduction’. I will cover general stuff about certifications, a suggested path for going through the clouds (for the hardcore “gotta get ’em all” cloud people out there) and some general notes on preparing for the certifications. The later posts will each focus on one of the cloud platforms.
The three musketeers
Correlation between getting a certification and knowing your stuff
Before going to the actual how-to part, let’s think for a while about what a certificate actually is and whether getting one holds any significance. Being a holder of a certificate means that you have passed an exam testing your general knowledge of the platform. Depending on your path (development, architecture etc.), your knowledge goes a little deeper in certain areas, but you most likely need to know the same basics for all associate-level certs on a single platform. A pro level exam means that you also possess deeper knowledge of the subject, as well as problem-solving skills, earning you the title Pro: professional proficiency in the subject. This is not to be mistaken for being a guru. Does a certificate make you a cloud engineer, to be quickly hired and put on a challenging customer project? A Pro level cert would certainly imply that, but an associate? No. An associate cert is a first stepping stone, meaning that you know some rules and best practices of the subject, but without any elbow grease on the platform it amounts only to a good start. Of course, you can just put your study cap on, study like one possessed and pass a pro exam without ever launching a single instance, but I dare say that’s quite an uncommon scenario.
Why bother with associate certifications, then? Well, as said previously, they indicate that you know the best practices of the platform, and while that might not land you your dream job, it’s still quite a big deal. When working in a cloud environment, it’s very easy to deploy applications and create virtual machines, but it’s also really easy to do so wrong, using architecture not fit for the cloud age or, in the worst case, compromising security. Yes, you could just watch the videos and read the documents and be equally knowledgeable as someone with a certificate, but if you took all that time to study, why not do the certification while you are at it.
Clouds and order of conquest
If your work or side projects do not involve any of the platforms and you are totally free to choose where to begin, I would (once again) pick AWS. AWS is by far the market leader, and mastering it still opens more doors than GCP and Azure combined. If you are more curious about, say, GCP, pick that. In studying, practicality comes second to motivation.
If you really have a lot of free time on your hands and want to get certified on all the cloud platforms, you can start wherever you want… but if you start with either AWS or GCP, do the remaining one before going for Azure. Terminology- and function-wise, AWS and GCP are quite similar, to the extent that Google has even published a quite handy cross-reference document to help AWS experts grasp their platform. While the terms for higher levels of abstraction, such as block storage and object storage, are similar in Azure too, the Microsoft way of doing cloud is still quite different. Understanding Azure requires you to forget how things are done on the other cloud platforms and learn the Azure way. I did the AWS → Azure → GCP trip and cannot recommend it to anyone.
Why use one sentence to describe the cloud platform study order, when you can confuse the ***t out of people with a diagram
An easyish path to studying for a certification goes like this:
- Read the certification requirements
- Do some web course that goes through the relevant material
- Read the documentation for most important services
- Do some practice exams
- Ace the exam
Certification requirements and service documentation are produced by the cloud platform vendors and can be read on their websites. Web courses and practice exams are usually provided by third parties; I will give hints about good sources in the platform-specific posts of this series.
If you spend one hour daily, you should be able to do your first cloud certification in two months, even without previous experience. I suggest that before trying any of the Pro level certifications, you get at least one year of hands-on experience with some cloud platform. It does not have to be that specific platform, as those exams usually emphasize “cloud thinking” more than trivia.
Even if there are differences in how the exams are done on different platforms, there are some universal strategies:
- Book your exam when you start studying
- It works as a goal for your studies, giving that small ‘oomph’ to your motivation
- In the exam, don’t get stuck, time is of the essence
- Mark the hard ones and come back later
- Don’t overstress
- Even if you fail, you can always try again. You’ll also benefit from failure: now you know your weak points and can improve on them
Today was the last day of the conference, and it’s starting to show. People are heading home, so the sessions are not that crowded, and the last session ends around noon or 1 pm. People wearing conference badges are thinning out, replaced by regular tourists.
I managed to get into a very good chalk talk about CloudFormation given by Check Meyer and Ryan Lohan, so a big thanks to them! We had a good discussion about tooling, feature requests and so on. This is also something that many people might overlook: AWS prioritizes features and their implementation based on feedback received from customers. You do not have to be an APN partner, hold certifications, or anything like that. As Amazon/AWS themselves say, they try to be the most customer-centric company there is. The most important thing is that instead of silently contemplating a feature or bug, you should make yourself heard. Join the AWS Developers Slack. Use Twitter, the AWS forums, email their evangelists or talk to their employees at any event. If the barrier is still too high, you can email me or my colleagues and we can bring your case to AWS. Make yourself heard!
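For readers who haven’t touched CloudFormation, the tooling discussed in that chalk talk operates on declarative templates. A minimal sketch looks something like this (the resource names here are made up for illustration; a real template would grow from here):

```yaml
# Minimal CloudFormation template: one versioned S3 bucket.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack with a single S3 bucket

Resources:
  ExampleBucket:                 # logical ID, used to reference the resource
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled

Outputs:
  BucketName:
    Value: !Ref ExampleBucket    # !Ref on a bucket returns its physical name
```

You describe the desired end state, and CloudFormation works out the API calls to create, update or delete the resources, which is exactly where the tooling and feature-request discussion gets interesting.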
Finally, some tips & tricks in case you find yourself in Re:Invent 2019!
First, don’t be too greedy. There are tons of good sessions, but the thing is, the breakout sessions are recorded and can be watched later on YouTube. Chalk talks, workshops and hackathons are not; in those you get to talk to the product teams or their representatives. I can highly recommend attending those, and if there are competing sessions at the same time, prioritize chalk talks and workshops over breakout sessions.
Second, as I mentioned in the first post, Las Vegas is designed to remove your money from you. There will be coffee/soda/water provided by Re:Invent, as well as some snacks. The expo area is excellent if you want to eat something; there is usually food being served. Hotels are expensive, and if you need to buy something, there are multiple Walgreens shops on the Strip.
Third, the keynotes by Andy Jassy and Werner Vogels are great. However, you should consider skipping them if there is something else interesting happening at the same time, for example hackathons or gamedays. The keynotes are usually recorded, and any announcements are also published on Twitter, blogs and so on.
Fourth, when booking your schedule, try to cluster the sessions/workshops/etc. you are attending by venue, as moving from one venue to another takes time. For example, The Mirage and the Venetian are very close to each other, so moving between them is much easier than moving from the Mirage/Venetian to Aria/Vdara. Likewise, Aria/Vdara/MGM are situated relatively close to each other.
Fifth, pick your parties. There are TONS of different parties hosted by AWS partners. You cannot visit all so choose early.
Sixth, talk to people. That might not be the easiest thing to do, especially for us Finns, whose culture is not the most extroverted. “Hey, my name is Aki. What do you do with AWS?” and the conversation takes off.
Now it’s time to sign off and get some rest before starting the long trip back home. Quoting Werner, it’s time to “Go build”.
Jeff Barr, Abby Fuller and Simon Elisha before the Twitch live stream