Computer security principles

One should keep in mind that there is no such thing as perfect security. To put it another way, 100% hack-proof systems do not exist. It all comes down to the resources the attackers have, whether money, brainpower, or equipment.
Be skeptical the next time you hear someone make that bold claim. A correctly designed system makes pursuing an attack more expensive, hopefully expensive enough that it is no longer profitable for the attacker to take it any further.
Security standards and best practices change quickly, so a system built five years ago does not necessarily conform to current standards.
So let's look at some proactive measures that can be taken to harden a system or its code.

Minimize the attack surface

The attack surface is the set of weak spots an attacker can leverage to gain unauthorized access to the system.
It can be badly designed hardware that allows remote access to anyone who knows a weak default password, or a wireless access point with broken encryption.
Keep the surface as small as possible and aim for a system that follows security best practices throughout.

Input validation

Input validation is one of the hardest things to get right in computer security. What if unwanted input slips past the filters? How do you even define invalid input?
Consider a web form with a username and a password field. In this example, the username is checked against a database. Since this is a naïve system, no input validation is performed.
Now, a malicious attacker types
'OR 1=1; /*'
into the username field and clicks the login button.
I highly doubt his or her name is 'OR 1=1; /*'.
Boom: the attacker is inside the system as a logged-in user without ever having an account. This attack is better known as an SQL injection attack.
Although the previous example may sound a bit far-fetched, this is a real-life problem.
Check MITRE's Top 25 Most Dangerous Software Errors.
Although the list is a bit dated (from 2011), it is still valid in many ways. Use the bound and prepared statements provided by your programming language whenever possible. Don't even think about concatenating a string from user input and passing it to an SQL API directly!
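To make this concrete, here is a minimal sketch in Python using sqlite3. The table, user, and password are made up, and the payload is a close variant of the one above (using a `--` comment instead of `/*`); the point is only the contrast between string concatenation and placeholders.

```python
import sqlite3

# Illustrative in-memory database with one made-up user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: user input is concatenated straight into the SQL string.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # SAFE: '?' placeholders make the driver treat input as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

payload = "' OR 1=1 --"
print(login_unsafe(payload, "whatever"))  # True: the injection succeeds
print(login_safe(payload, "whatever"))    # False: the payload is just a string
```

With the unsafe version the query collapses to `WHERE name = '' OR 1=1 --...`, which matches every row; with the prepared statement the same payload is searched for literally and matches nothing.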
By following the input-handling path you can find out what can go wrong: which functions and variables are touched along the way. Basically, everything that touches the input stream can be marked as tainted. Be very cautious when calling functions like cmd() or system() with user input. A better option is to use safe alternatives such as the exec family of functions.
Always sanitize your inputs. No excuses.
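The same point, sketched in Python: passing an argument list to `subprocess.run` behaves like the exec family, so user input never reaches a shell. The filename below is a made-up example of hostile input.

```python
import subprocess

# Hypothetical user-supplied value containing a shell-injection attempt.
user_input = "notes.txt; echo pwned"

# DANGEROUS (left commented out): with shell=True the string is parsed by
# the shell, so "; echo pwned" would run as a second command:
#   subprocess.run("ls " + user_input, shell=True)

# SAFER: an argument list is passed exec-style; the whole string becomes a
# single literal argument, never shell syntax.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout.strip())  # the input echoed back as one argument
```

No second command runs; the attacker's `; echo pwned` is just part of one harmless argument.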

KISS

Keep It Short & Simple / Keep It Simple, Stupid. Don't create complicated functions and structures that are hard to understand. There is no simple definition of what is hard, and hard varies from person to person. These are commonly considered good practices:

  • Avoid magic numbers
  • No hacky solutions
  • Consistent naming
  • Keep conditions simple

This is one of those things without a proper and exact answer.
Let's take a real-life example.
The current implementation needs a feature X, which is provided by a library L. Library L, however, also drags in a few thousand lines of extra code. More lines of code mean more bugs; only the ratio varies, whether it is one bug per thousand lines of code or one bug per ten thousand. So far no one has been able to dodge this universal formula. We are mere humans, after all.
Therefore, among those extra lines of code there are likely to be bugs, and some of those bugs might evolve into vulnerabilities. Add a few more libraries and the codebase becomes cumbersome to audit.
Another example is not leaving ports open to the Internet. I cannot think of a single scenario where a database server should be listening on a public interface.
Think not once or twice, but thrice, about what problem the library is going to solve.
Will it bring new problems and widen the attack surface?
Will it burden the update process?
Will it bring more dependencies?
Could the code be more terse, i.e. could you copy and paste only the required snippet?
If you do, remember to check that the library's license permits it.

Whitelists and blacklists

Do you need to permit SSH from one host and deny it for others?
Use whitelisting in the sshd config, or on a firewall.
In AWS S3, whitelisting can be used to grant access to certain files for certain users.
Or do you need to block certain email spammers? Blacklist them using spam filters and traps.
Blacklists and whitelists also play a role when sanitizing different kinds of input. A rule of thumb: be explicit rather than implicit.
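As a small sketch of the explicit-over-implicit rule, here is a hypothetical upload filter in Python. The extension set is illustrative; the idea is that everything not explicitly listed is denied by default.

```python
from pathlib import Path

# Whitelist sketch: deny by default, permit only known-safe extensions.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".txt"}

def is_allowed_upload(filename: str) -> bool:
    # Only the final suffix counts, so "evil.txt.php" is judged by ".php".
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_allowed_upload("photo.png"))     # True
print(is_allowed_upload("evil.txt.php"))  # False: not on the whitelist
```

A blacklist of "bad" extensions would have to enumerate every dangerous suffix and would miss the next one; the whitelist fails safely instead.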

Fail safely

Ever seen a production web page or a program showing a stack trace? Stack traces are valuable for developers but not for users. Not only is a stack trace useless to users, it can also be harmful: at worst it reveals passwords or discloses information about the running system and its environment.
Logging errors to a file is the right thing to do here. The worst thing I have seen is a system that offers a command-line prompt when a stack trace occurs. Don't rely on your users to fix a broken system. They won't. Hence, use logs, Luke.
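A minimal failing-safely sketch in Python: the full traceback goes into a log file for developers, while the user only ever sees a generic message. The log filename and handler function are illustrative.

```python
import logging

# Developers read app.log; users never see a traceback.
logging.basicConfig(filename="app.log", level=logging.ERROR)

def handle_request(data: str) -> str:
    try:
        value = int(data)                  # may raise ValueError
        return f"Result: {100 // value}"  # may raise ZeroDivisionError
    except Exception:
        # logging.exception records the stack trace in the log file,
        # not in the response shown to the user.
        logging.exception("Request failed for input %r", data)
        return "Something went wrong. Please try again later."

print(handle_request("4"))     # Result: 25
print(handle_request("oops"))  # Something went wrong. Please try again later.
```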

In the worst-case scenario, the aforementioned command prompt gives a user full access to the system with superuser rights. I did not dig any deeper into that example system, but that prompt is surely its weakest link. At least I really hope so!
Too often, passwords are the weakest link of a system. A weak password is one that can be guessed with a brute-force attack in a reasonable time. In a brute-force attack, one keeps guessing passwords or passphrases in the hope of eventually getting them right. Unfortunately, several data breaches have shown that this is still viable: passwords like '123456' and 'password' still top the lists of the most common passwords.
Better options are public-key authentication and/or multi-factor authentication.
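Some back-of-the-envelope arithmetic shows why weak passwords fall to brute force. The guessing rate below is an assumed figure for illustration, not a measured benchmark.

```python
# Brute-force arithmetic sketch; the rate is an assumption.
guesses_per_second = 10**9   # assumed offline cracking rate

lowercase_8 = 26**8          # 8 characters, a-z only
mixed_12 = 62**12            # 12 characters, a-z, A-Z, 0-9

seconds_short = lowercase_8 / guesses_per_second
years_long = mixed_12 / guesses_per_second / (365 * 24 * 3600)

print(f"8 lowercase chars: ~{seconds_short:.0f} seconds to exhaust")
print(f"12 mixed chars:    ~{years_long:.0f} years to exhaust")
```

At that assumed rate, the short lowercase keyspace falls in minutes, while the longer mixed-alphabet one takes on the order of a hundred thousand years; length and alphabet size dominate.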

Least privilege

Each component should run with the least possible privilege. Consider running a web server as the root user, in a single process. What could go wrong? If an attacker finds a vulnerability and is able to exploit it, he or she gains access to the whole system with superuser credentials. Okay, that has been fixed: the web server is now happily running under a dedicated user.
Oh snap, our newly created dynamic web site with a user database doesn't work and returns an 'access denied' message. After some debugging we notice that our SQLite database file doesn't have the correct permissions.
Time to fix that with the good old `chmod 777 mydatabase.db` trick. Voilà, everything works!
A week later the site is compromised. What happened? The site had file uploads enabled, and someone uploaded their own database, which overwrote the existing one. Whoops.
Or an alternative ending: someone uploaded a `mytextfile.txt.php`, which also ended up with `chmod 777` rights.
Whoops.
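What the `chmod 777` shortcut should have been, sketched in Python: grant only what the server process actually needs. The paths here are temporary and illustrative.

```python
import os
import stat
import tempfile

# Least privilege for files: instead of chmod 777, grant only what the
# web server actually needs.
workdir = tempfile.mkdtemp()
db_path = os.path.join(workdir, "mydatabase.db")
open(db_path, "w").close()

# Owner may read/write, group may read, everyone else gets nothing (0o640).
os.chmod(db_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(db_path).st_mode)
print(oct(mode))  # 0o640: the database file is no longer world-writable
```

With 0o640 and the web server in the file's group, uploads running as other users can no longer overwrite the database.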
An example from AWS land is to use security groups and their ingress rules accordingly. A setup with a load balancer and a few EC2 instances should permit Internet traffic only via the load balancer.
Hence:
[works] Internet -> Load balancer -> EC2

[kaput] Internet -> EC2

Isolation and layering

Once one security measure fails, what is the next layer that catches the attacker? Are you relying on a single layer? Please don't.
In networking this could mean separating users and customers into distinct networks with virtual LANs, for example. Or, in AWS, separating networks into private and public subnets. If a badly configured service listens on a public subnet, it puts both the host and the whole subnet in danger.
A service listening on a private subnet cannot be reached directly from the Internet and can be protected with more fine-grained access control. In short, it is much harder to accidentally leave it wide open to the Internet.
And to fix a common misconception: Docker by itself is not a proper security solution. Even the Docker developers say it is not suitable as a serious security boundary, so don't count on it.

Cryptography

Don't ever invent your own crypto, ever. Even cryptographers have their theorems peer reviewed, and that is a time-consuming process.
Don't use weak algorithms that are known to be broken, such as MD5 (practical hash collisions) or DES (a 56-bit key). There are certain situations where both are still acceptable, but don't take risks.
Stick with widely adopted and battle-tested algorithms: something that has been peer reviewed and has no known practical attacks, especially if that 'something' is supposed to stay secret and unseen by others.
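A small Python illustration of picking the battle-tested option: the standard library exposes both MD5 and SHA-256, and swapping in the stronger one costs nothing. The input bytes are arbitrary.

```python
import hashlib

data = b"attack at dawn"

# MD5 has practical collisions; don't use it where integrity or
# authenticity matters.
weak = hashlib.md5(data).hexdigest()

# SHA-256 is a widely adopted, peer-reviewed hash with no known
# practical collisions.
strong = hashlib.sha256(data).hexdigest()

print(f"MD5:     {len(weak) * 4}-bit digest")
print(f"SHA-256: {len(strong) * 4}-bit digest")
```

Note that for storing passwords, a deliberately slow construction such as `hashlib.scrypt` or `hashlib.pbkdf2_hmac` is a better fit than any plain fast hash.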
When configuring web servers or load balancers, verify that the encryption methods in use are sound, for example with the Qualys SSL Labs scanner.

Secure by default

Use safe default values for each system and component. Don't let users log in without a password, and require strong passwords.
If there is an option in a compiler, a program, or whatever that improves security, switch it on by default. If that breaks things, fix, rinse, and repeat. If it turns out to be next to impossible to fix, then omit the option.
In some cases certain parties will need some gentle pushing; without it we would still be using ROT13 and its variations.
Use ansible-vault whenever secrets need to be stored in a shared place. Don't ever commit plain-text passwords, ever.
And most of all, use static and dynamic code-analysis tools to catch bugs as early as possible.
There are several options available, depending on the language.
Here are a few open-source options:
For C/C++:

  • LLVM/Clang (includes scan-build)
  • Valgrind (runtime analysis)
  • Cppcheck
  • SonarQube

Java:

  • SonarQube

PHP:

  • SonarQube

Conclusion

To conclude, here are some concrete day-to-day examples that improve your security, mostly on *nix systems.

Fairly often I've seen users running Docker via sudo.
This is unnecessary.
Add yourself to the docker group and log in again:
sudo usermod -aG docker $(whoami)

Running Wireshark as root?
Stop it.
Here's a nickel, kid:
sudo usermod -aG wireshark $(whoami)

Ansible

  • Create an encrypted playbook

ansible-vault create myplaybook.yml

  • Encrypt an existing playbook

ansible-vault encrypt myplaybook.yml

  • Edit encrypted myplaybook.yml

ansible-vault edit myplaybook.yml

  • View encrypted myplaybook.yml

ansible-vault view --ask-vault-pass myplaybook.yml

  • Decrypt the playbook

ansible-vault decrypt myplaybook.yml

Ville Valkonen

Ville works at Gofore as a systems specialist. In his work, Ville automates things as much as possible, mainly infrastructure-related tasks. In his free time Ville codes; information security is also close to his heart.


In this blog post, I'm going to tell you about our design for an electric bicycle user interface, implemented using the Qt Company's software development kit. I'll go through the case with you and share some insights we gained from the experience, especially on how to design user interfaces with very low-performance (some might say cost-effective) hardware in mind.
You can also watch a video of the presentation I gave on the subject at the Design Jyväskylä Meetup on January 17th.

The Case
For CES 2018, the Qt Company wanted to demonstrate that an independent design and development team could effectively create an electric bicycle user interface that would be user-friendly and visually appealing on entry-level touch screen hardware. The whole UI had to run at 60 frames per second on hardware without a dedicated GPU.
The Team
We took this demanding project as a creative challenge and put together a team of two UX designers, one visual designer, and one developer, with skill sets complementing each other. The project spanned only 8 to 10 weeks, which meant we needed to work very efficiently and leanly to achieve a great result.
The Hardware
The target hardware was a very entry-level resistive 5.7-inch touch screen and a very low-performance Toradex board with an integrated graphics controller. The development board in question had very little memory, which, in addition to the lack of a dedicated GPU, made things tricky.
At this point someone might say: "60 frames per second, on a device that doesn't have a dedicated GPU?! Is that even possible?"
It turns out that with the right technology, people, and know-how: yes, it is.
How did the project go?
Initially we created two separate concepts with the target hardware in mind, in only a couple of days. After a review by the Qt Company, we ended up picking the best parts of both. We went forward with a concept that combines a large main dial, summing up the most important indicators an e-bike user needs while riding, with a menu interface operated by pushing the different quarters of the screen. The design focused on ease of use and clarity of interaction on a not-so-sensitive resistive touch screen.

Each week we refined and further developed the initial concept. We would create development tests and visual designs right away, then implement them on the target hardware to see whether they matched our standards and ran at 60 fps. We also interviewed and tested actual e-bike users during the project to make sure the final product would cater to their needs.
The final product can be seen in more detail in this short demonstration clip:

Insights on designing for low-end hardware
The design methods and overall process don't differ that much from the usual when designing for low-end hardware, but in our opinion a couple of things are emphasised more. Here are a few things we learned:
Prototype and test early on the target hardware: fail fast!

  • Even though you can run your build in a desktop environment and get some sense of your design in action, the true insights into how your design works only come when you play around with it on the actual hardware.
  • The look and feel is totally different (and depends on the hardware), and it reveals things that aren't apparent in desktop mode, for example colour reproduction, contrast, and touch usability.

Prefer images of UI elements over drawing elements on the hardware.

  • Qt can draw circles and squares to some extent, but more complex UI elements should be implemented as images to achieve a visually good-looking UI that isn't overly demanding for hardware without a dedicated GPU and with little memory.

Forget about dynamic transparency, fades, etc. for the most part.

  • For example, on/off-type transparency is mostly fine, but dynamically changing effects, other than moving static UI elements around with anchors, are not recommended in terms of performance.
  • Avoid changing the whole view at once; prefer changing only smaller parts at a time for smooth transitions.

If you take away only one thing from this blog post, I think it should be the first point. One cannot stress enough the importance of constant testing and prototyping on the actual target hardware, in realistic use contexts. The "fail fast" principle applies to many disciplines, from business to software design, and in our opinion it applies strongly to designing for low-performance hardware as well. So test a lot and prototype early in order to fail fast: it will make your design better.


Olli Pirttilä

Olli works at Gofore as a UX designer. In his work, Olli designs services, user experiences, and user interfaces for various digital services and products. Olli is especially interested in new technologies, new media, and video production.


It doesn't always have to be so serious

What if the next internal sprint were called 'Cloud Bunker'? What if we ordered team tracksuits and camouflage netting for the crew, stepped out of our physical comfort zone, and moved to another floor for a week?

And let's add team mugs and towels to the order; we're already going overboard, so why not.

What if we didn't ask anyone for permission, and just dared to put it all on the company credit card? What if we surprised the team on a Monday morning?
I regret nothing, except grossly underestimating the team's need for food.

Would you like to join us? Being uptight is not our everyday style: fooling around is allowed, even encouraged. Check out our open positions.

Tero Vepsäläinen

Tero is an ops guy, coach, and service manager. He is responsible for the operations of Gofore Cloud and likes to keep his hands dirty with the design and implementation of cloud-native systems.


Nothing grows by standing still


What is competence development like in Finland's best workplace? Can you genuinely develop yourself at Gofore, and what kinds of things can you spend working hours on?
I hear these questions at regular intervals. Usually the asker is either a job seeker interested in Gofore as an employer, or a person holding a similar competence-development role in another organization.
We help people find their personal enthusiasm and their own direction, and we offer a growth platform and a broad range of development opportunities that fit within working hours. The person responsible for competence development elsewhere is usually astonished by this answer. Our system, built on individual development and enthusiasm, is still in the minority while more traditional competence-development models dominate the field.
Continuously learning new things and improving the way we work are necessities in every industry. Our key insight, however, has definitely been the individual development model, which is the central building block of our culture of continuous learning: enthusiastic, constantly developing Gofore employees spread their skills and experiences to their colleagues and tackle problems together, fixing them open-mindedly. A genuinely working learning culture then emerges naturally and without magic tricks.
Our culture also includes the courage to act differently and the ability to accept occasional chaos, mild uncertainty, and disorder. Imperfection is in fact the magic dust that lays the foundation for continuous development and improvement. By encouraging Gofore employees to learn and improve continuously, we are also building Gofore's future competitiveness.

Training, conferences, and coding projects

Last year, Gofore employees spent nearly 12,000 hours* of working time on independently developing their own competence. In working days, that is nearly 1,600 days, so the average Gofore employee spent about six working days a year on developing their own skills. At a quick glance this may seem low. It is worth remembering, however, that developing one's own competence here means independently studying new technologies, participating in training, conferences, and meetups, or running a personal coding project, according to each person's own situation and development needs. Looking at the numbers more closely, I noticed that one of my colleagues spent half a day on their own development last year, another thirteen days; whatever best supported that person's own development path. In my view, this purposefulness works out excellently here. Many Gofore employees are in projects and roles that develop their skills quite holistically, in which case the work itself carries the person toward their goal, and separate training or other competence updates are not even needed.
We spent a good 1,000 hours on sharing competence. In practice this means guild activities, internal training, and short info sessions: preparing and delivering them. I am proud of this crew; to me it is a good demonstration of our strong learning culture. This year we are investing especially in developing practices that support knowledge sharing, since we have nearly doubled our headcount in a year and want things to keep working well. We will also continue our internal training sessions, which have received excellent participant feedback: lean thinking, influencing, presenting, and writing, among others.
Competence is valued here. Competence is what we sell to our customers. Investing in employees' competence development pays off, because nothing grows by standing still, least of all Gofore. I am eagerly waiting to see what this year brings to our culture and practices of learning and development. Awesome times ahead!
*) The figures do not include Gofore Oyj's subsidiaries.


Heini Ala-Vannesluoma

Heini is a Lead Coach & People Development Consultant at Gofore, responsible for competence development and internal coaching.
