Security in the cloud

Quite often I hear the claim “on-premise is more secure than cloud”.
Having worked in both the on-premise and cloud worlds for several years, I’d like to dissect that claim into smaller parts and do some comparisons.
Regarding cloud environments, I’ll stick with Amazon Web Services (AWS) which I am the most familiar with.

Physical security

Let’s start with physical security.
A properly configured server room must cover the following:

  • Deny unauthorised access
  • Ways to prevent and detect tampering
  • Although not directly related to intrusion or unauthorised use, fire alarm and fire suppression systems must be present
  • All rack cases must be locked so that, for example, thumb drives cannot be inserted
  • Backups must reside in a remote location and must comply with the same security policy as the on-premise source

In cloud environments, the above-mentioned best practices are the responsibility of the service provider – if not, please change your provider – quickly!
With such best practices in place, a cloud customer doesn’t need to be concerned with the hardware aspects when designing a cloud-based system.

Software security

Regarding software security, the following topics must be covered:

  • Keep software up to date
  • Scan for vulnerabilities
  • Scan for misconfigurations
  • Layer your security

<shameless plug>If you missed my previous post, some of these topics were covered in greater detail here: https://gofore.com/computer-security-principles/ </shameless plug>
Another often-heard claim is “data is so sensitive that it cannot reside in the cloud”.
Right, so why is that computer connected to the Internet?
Everything is crackable, and the firewall in front of the computer is just a teaser in the game. If the data is that sensitive, then it must be stored in an encrypted format. You’ve got this covered, right? I hope so!
For these kinds of best practices, AWS offers great tools:

  • Encrypted S3 storage (object storage)
  • Systems Manager Parameter Store to keep secrets, such as database credentials, stored in encrypted form
  • Key Management Service to automate key handling, including key rotation and audit trail

(to name just a few examples)
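As a minimal sketch of how Parameter Store and KMS work together (using boto3; the parameter name and the KMS key alias are placeholders for illustration), storing and reading an encrypted secret might look like this:

```python
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")

# Store a database password as an encrypted SecureString.
# "alias/my-app-key" is a placeholder for your own KMS key.
ssm.put_parameter(
    Name="/myapp/db/password",
    Value="s3cr3t-value",
    Type="SecureString",
    KeyId="alias/my-app-key",
    Overwrite=True,
)

# Read it back; KMS decrypts the value transparently.
param = ssm.get_parameter(Name="/myapp/db/password", WithDecryption=True)
print(param["Parameter"]["Value"])
```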
If virtual machines are being run, one should be aware that Spectre and similar hardware vulnerabilities pose a danger to some extent, especially in the cloud where hardware is shared between customers.
An attacker’s virtual machine instance needs to be located on the same host machine on which the victim’s instance is running.
These kinds of vulnerabilities are patched very swiftly as soon as a fix is available, especially since they pose a danger to the provider’s core business. Therefore these attacks are short-lived – unless a new zero-day exploit is found. And even then, the zero-day exploit must be applicable and:

  • Be moderately quick to exploit to be of any benefit
  • Have a fairly high success rate and give enough permissions to control the needed resources

An improvement would be to use cloud-native components to handle load balancing, container orchestration, message brokering and so on.
Why? Because those are constantly audited by the cloud provider, resulting in a smaller attack surface compared to managing a whole operating system and its software components (and their updates).
Copying an insecure application into the cloud doesn’t make it magically safer.
Regarding security standards, AWS complies with the following letter and number bingos:

  • SOC 1/ISAE 3402, SOC 2, SOC 3
  • FISMA, DIACAP, and FedRAMP
  • PCI DSS Level 1
  • ISO 9001, ISO 13485, ISO 27001, ISO 27017, ISO 27018

These standards fulfil the requirements of Nasdaq, the US Department of Defense, and Philips Healthcare, to mention just a few high-profile customers. These organisations take security seriously and have huge budgets for their security teams.
AWS Aurora is a MySQL- and PostgreSQL-compatible relational database service (RDS) that offers automatic scaling and updates, which diminishes the burden of updates drastically. Major version updates can be done this way too, though it’s against best practice to upgrade without testing first. You have been warned!
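As a hedged sketch (using boto3; the instance identifier is a placeholder), opting an RDS instance in to automatic minor version upgrades might look like this:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Apply minor engine upgrades automatically during the maintenance
# window. "my-aurora-instance" is a placeholder identifier.
rds.modify_db_instance(
    DBInstanceIdentifier="my-aurora-instance",
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,
)
```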
The biggest cloud providers, namely Amazon, Google and Microsoft, have some of the most talented people in the field working on their products to keep their customers’ data secure. Compare this to on-premise scenarios where, in the worst cases, it’s a one-man show. If (s)he is not really interested in security, then it’s a security nightmare waiting to be unleashed.
Nothing protects against faulty configuration choices in the cloud either, though some things are harder to make globally reachable by default.
In conclusion, the cloud is not the new kid on the block anymore.
Learn your environment and implement best practices.
A correctly configured cloud is secure and might save the administrator/DevOps/whatever from sleepless nights.

You can learn more about gaining cloud certifications in our blog series starting here: https://gofore.com/en/getting-certified-on-all-cloud-platforms-part-1-introduction/


Computer security principles


One should keep in mind that there’s no such thing as perfect security. To put it another way, 100% hack-safe systems do not exist. It’s all about the resources the attackers have, whether it is money, brainpower, or equipment.
Be alarmed the next time you hear a bold claim of a 100% secure system. With a correctly designed system, pursuing an attack becomes more expensive – hopefully expensive enough to make it unprofitable for the attacker to take it any further.
Security standards and best practices change quickly, and therefore a system built five years ago does not necessarily conform to current standards.
So let’s look at some proactive measures that can be done to harden a system or code.

Minimize the attack surface

The attack surface is the set of weak spots an attacker can leverage to gain unauthorised access to the system.
It can be badly designed hardware that allows remote access to anyone with a weak default password, or a wireless access point with weak encryption.
Aim for a system that follows security best practices from the start.

Input validation

Input validation is one of the hardest things to get right in computer security. What if an unwanted input passes the mitigation filters? How to define invalid input?
Consider a situation where a web form contains a username and a password field. In this particular example, the username is checked against a database. Since this is a naïve system, no input validation is performed.
Now, a malicious attacker inputs the following
'OR 1=1; /*'
in the username field and clicks the login button.
I highly doubt his or her name is ‘OR 1=1; /*’.
Boom, the attacker is inside the system as a logged-in user without having an account. This attack is better known as an SQL injection attack.
Although the previous example may sound a bit far-fetched, this is a real-life problem.
Check MITRE’s Top 25 most dangerous software errors.
Although the list is a bit outdated (from 2011), it’s still valid in many ways. Use the bound and prepared statements provided by your programming language when possible. Don’t even think about concatenating a string from user input and passing it directly to an SQL API!
By following the input handling path, one can find out what can go wrong: which functions and variables are touched along the way. Basically, everything that touches the input stream can be marked as tainted. Be very cautious when calling functions like cmd() or system() with user input. A better option is to use safer alternatives such as the exec family of functions.
Always sanitize your inputs. No excuses.
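To make this concrete, here is a minimal, self-contained sketch in Python with SQLite (the table and its values are made up for illustration) showing why a bound parameter defuses the attack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, password TEXT)")
cur.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

hostile = "' OR 1=1; /*"  # the attacker's "username"

# BAD: concatenation lets the input rewrite the query so it matches
# every row:  SELECT * FROM users WHERE name = '' OR 1=1; /*'
# GOOD: a bound parameter is always treated as data, never as SQL.
cur.execute("SELECT * FROM users WHERE name = ?", (hostile,))
print(cur.fetchall())  # [] – no user has that literal name
```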

KISS

Keep It Short & Simple / Keep It Simple, Stupid. Don’t create complicated functions and structures that are hard to understand. There’s no simple definition of what is hard, and hard varies from person to person. These are commonly considered good practices:

  • Avoid magic numbers
  • No hacky solutions
  • Consistent naming
  • Keep conditions simple

This is one of those things without a proper and exact answer.
Let’s take a real-life example.
The current implementation needs a feature X, which is provided by a library L. Along with the feature, library L pulls in a few thousand lines of extra code. More lines of code, more bugs; only the ratio varies, whether it is one bug per thousand lines of code or one per ten thousand, and so on. So far no one has been able to dodge this universal formula. We are mere humans after all.
Therefore, among these extra lines of code there are likely to be bugs and some of these bugs might evolve into vulnerabilities. Add a few more libraries and the codebase becomes cumbersome to audit.
Another example would be not leaving ports open to the Internet. I cannot think of a single scenario where a database server should be listening on a public interface.
Think not once or twice, but thrice, about what problem the library is going to solve:

  • Will it bring new problems and widen the attack surface?
  • Will it burden the update process?
  • Will it bring more dependencies?
  • Could you copy and paste just the required snippet instead? If so, remember to check that the library’s license permits this.

Whitelists and blacklists

Do you need to permit SSH from one host and deny for others?
Use whitelisting in an sshd config, or on a firewall.
In AWS S3, one can leverage the same whitelisting technique to grant access to certain files for certain users (see the sketch after this paragraph).
Or do you need to block certain email spammers? Blacklist them by using spam filters and traps.
Blacklists and whitelists also play a role when sanitizing different kinds of inputs. A rule of thumb: prefer explicit over implicit.
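Returning to the S3 example above, here is one way to express such a whitelist, as a sketch using boto3 (bucket name, account id and user name are placeholders): the policy explicitly allows a single IAM user to read objects under one prefix, and nothing else is granted.

```python
import json
import boto3

s3 = boto3.client("s3")

# Whitelist: only the IAM user "alice" may read objects under
# reports/ in "my-bucket". All names and ids are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/reports/*",
    }],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```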

Fail safely

Ever seen a production web page or program with a stack trace? Stack traces are valuable for developers but not for users. Not only is a stack trace useless for users, it can also be dangerous: at worst, it reveals passwords or discloses information about the running system and environment.
Logging errors into a file is the right thing to do here. The worst thing I’ve seen is to offer a command line prompt when a stack trace occurs. Don’t rely on your users to fix a broken system. They won’t. Hence, use logs, Luke.

In the worst-case scenario, the aforementioned command prompt will give a user full access to the system with superuser rights. I did not dig any deeper into the example system, but inevitably that will be its weakest link. At least I really hope so!
Too often, passwords are the weakest link in a system. A weak password can be guessed with a brute-force attack over time. In a brute-force attack, one guesses passwords or passphrases in the hope of eventually getting them right. Unfortunately, several data breaches have shown this is still viable: passwords like ‘123456’ and ‘password’ are still topping the lists of most common passwords.
Better options would be to use public key authentication and/or multi-factor authentication.

Least privilege

Each component should run with the least possible privilege. Consider running a web server as the root user. What could go wrong? If an attacker finds a vulnerability and is able to exploit it, they will have access to the whole system with superuser credentials. Okay, that has been fixed: the web server is now happily running under a dedicated user.
Oh snap, our newly created dynamic website with a user database doesn’t work and returns an ‘access denied’ message. After some debugging we notice that our SQLite database file doesn’t have the correct permissions.
Time to fix that with a good old `chmod 777 mydatabase.db` trick. Voilà, everything works!
After a week the site is compromised. What happened? The site had file uploads enabled and someone uploaded their own database, which overwrote the existing database. Whoops.
Or an alternative ending: someone uploaded a `mytextfile.txt.php` that had `chmod 777` rights set. Whooops.
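A safer fix, as a minimal sketch in Python (assuming the web server runs as the common `www-data` account – adjust for your distribution): hand the file to the server’s dedicated user and group, and grant only the permissions it actually needs.

```python
import os
import shutil

# Give the database file to the web server's dedicated account
# ("www-data" is an assumption; use your distro's service user)...
shutil.chown("mydatabase.db", user="www-data", group="www-data")

# ...and allow owner read/write plus group read - nothing for others.
os.chmod("mydatabase.db", 0o640)
```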
An example from AWS land is to use security groups and their ingress rules accordingly. A setup with a load balancer and a few EC2 instances should permit Internet traffic in via the load balancer, and only from there.
Hence:
[works] Internet -> Load balancer -> EC2

[kaput] Internet -> EC2
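As a hedged boto3 sketch (both security group ids are placeholders), the EC2 instances’ group would accept HTTP only from the load balancer’s group instead of from the whole Internet:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Allow HTTP into the EC2 instances' security group only from the
# load balancer's security group - never from 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: EC2 instances
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # placeholder: load balancer's group
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
    }],
)
```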

Isolation and layering

Once one security measure fails, what will be the next layer to catch the attack? Are you relying on a single layer? Please don’t.
In networking, this could mean separating users and customers into distinct networks with virtual LANs, for example, or in AWS, separating networks into private and public subnets. Now, if some service is badly configured and listening on a public subnet, it puts the host and the whole subnet in danger.
A service listening on a private subnet cannot be reached directly from the Internet and can be configured with more fine-grained access control. In short, it’s much harder to accidentally leave a service wide open to the Internet.
And to fix a common misconception: Docker is not a proper security solution. Even Docker’s own developers say it’s not suitable as a serious security boundary. So don’t count on it.

Cryptography

Don’t ever invent your own crypto – ever. Even cryptographers need to have their theorems peer-reviewed, and that’s a time-consuming process.
Don’t use weak algorithms: the MD5 hash is known to have collisions, and the DES cipher’s key is far too short. There are certain situations where both are still valid, but don’t take risks.
Stick with widely adopted and battle-tested algorithms – something that has been peer-reviewed and has no known collisions, especially if that ‘something’ should stay secret and not be seen by others.
When configuring web servers or load balancers, verify that the encryption methods in use are sound, for example with the Qualys SSL Labs scanner.
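For a quick local sanity check (a sketch using Python’s standard library; example.com stands in for your own host), you can at least see which protocol version and cipher a server negotiates – SSL Labs then gives the full report:

```python
import socket
import ssl

host = "example.com"  # placeholder for your own server

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # (cipher name, protocol, secret bits)
```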

Secure by default

Use safe default values for each system and component. Don’t let users log in without a password, and require strong passwords.
If there’s an option in a compiler, a program or whatever that improves security, switch it on by default. If that breaks things: fix, rinse and repeat. If it turns out to be next to impossible to fix, then omit the option.
In some cases, certain parties will need some gentle pushing. Without that push, we’d still be using ROT13 and its variations.
Use ansible-vault whenever secrets need to live in a shared repository. Don’t ever commit plaintext passwords – ever.
And most of all, use static code analysis tools to catch bugs as early as possible.
There are several options available, depending on the language.
Here are a few open-source options:
For C/C++:

  • llvm (clang) includes scan-build
  • valgrind (strictly speaking a dynamic analysis tool, but worth running alongside)
  • CppCheck
  • SonarQube

Java:

  • SonarQube

PHP:

  • SonarQube

Conclusion

To conclude, here are some concrete day-to-day examples that improve your security, mostly on *nix.

Fairly often I’ve seen users running Docker via sudo.
This is unnecessary.
Add yourself to the docker group and log in again:
sudo usermod -aG docker $(whoami)

Running Wireshark as root?
Stop it.
Here’s a nickel, kid:
sudo usermod -aG wireshark $(whoami)

Ansible

  • Create an encrypted playbook

ansible-vault create myplaybook.yml

  • Encrypt an existing playbook

ansible-vault encrypt myplaybook.yml

  • Edit encrypted myplaybook.yml

ansible-vault edit myplaybook.yml

  • View encrypted myplaybook.yml

ansible-vault view --ask-vault-pass myplaybook.yml

  • Decrypt the playbook

ansible-vault decrypt myplaybook.yml

Ville Valkonen

Ville is a System Specialist at Gofore. By day he is an infrastructure automation guru; by night, he zips up his hoodie and codes away focusing on security holes.