The Finnish National Agency for Education’s Studyinfo is an online portal that allows users to search for study programmes leading to a degree in Finland and apply for studies online. The portal also serves as a seamless collection of additional services offered to Finnish education providers.
Studyinfo is used by over 300,000 learners and over 10,000 officials annually. The portal encompasses dozens of back-end systems, requiring support from modern infrastructure. Originally built in a traditional server room, the National Agency for Education wanted to renew the service and transfer it over to a cloud-based platform. Gofore was the chief technical contractor in the multi-supplier project.
“The Finnish National Agency for Education had been searching for a more cost-efficient and modern capacity solution for modernising its Studyinfo service for a while. During this search we reviewed and compared several cloud-based options. The one that we found to best suit our needs was Amazon Web Services (AWS). AWS is well known throughout the world and there were a good number of experts specialising in it in Finland, which was very important for us,” says Chief Information Officer Erja Nokkanen from the Finnish National Agency for Education.

Major transfer carried out without a hitch

The Studyinfo service was successfully transferred over to the AWS cloud platform in February 2018. The transfer project was challenging, as the system, which was in constant use, is composed of several interdependent back end systems and offers dozens of external integrations to other organisations, such as Kela.
“The cloud project was launched in August 2017 in collaboration between the Finnish National Agency for Education, Gofore and the National Agency for Education’s service developers. The six-month project was carried out in an agile and flexible manner, allowing the actual transfer to be conducted smoothly and without any problems,” state Erja Nokkanen and Senior Adviser Mika Rauhala from the Finnish National Agency for Education.
With its transfer to AWS, Studyinfo became a pioneer among major public administration systems when it comes to the technical solutions of the future. The cloud-based service was immediately put to the test in spring 2018 when tens of thousands of applicants used the service to submit their applications to upper secondary schools and higher education institutions as part of the national joint application procedure.

What kind of benefits does the new infrastructure offer?

  1. Flexible development
    The flexibility provided by the cloud platform allows developers to easily create multiple development environments. This makes it easy to add new applications and incorporate new technologies into the system, either by utilising the platform’s existing components or by developing new ones.
  2. Scalability
    Applications can be automatically scaled both up and down according to load: up whenever more performance is needed, and down to provide cost savings. The cloud platform allows such changes to be carried out within minutes instead of days or weeks.
  3. Cost-effectiveness
    The costs of administering cloud-based infrastructure are lower than those associated with traditional infrastructure. The service platform takes care of many tasks that need to be handled manually in traditional server rooms. The costs associated with development environments are also based on usage. If a development environment is not currently used, for example, it can be shut down and booted up again when needed.
  4. Infrastructural reproducibility
    A cloud platform enables software-based infrastructure management. This way the entire system is always documented, available to all relevant parties and easily reproducible. The system’s revision history is also easier to access.
  5. DevOps
    The system is no longer a black box, as the infrastructure is accessible to everyone. Developers are responsible for their output throughout the entire lifecycle of the software, and can make quick changes to the system when necessary. Bringing the infrastructure closer to developers and product owners like this saves both time and money.

Ville Seppänen

Ville works to promote agile software development culture in Gofore and helps customers to fulfill their wildest dreams. He also works with cloud consulting.


The ongoing digital transformation is changing the operating logic and operating environment of organisations, as a result of which customer-orientation and the ability to offer a good customer experience are becoming increasingly vital success factors. This is undoubtedly one of the reasons why so many organisations have started to incorporate customer-orientation and providing a good customer experience into their stated values, in addition to making them strategic operational objectives.
The operations of an organisation are steered – or at least should be steered – by its strategy. The strategy, in turn, is shaped by both the organisation’s values and its view of human beings, i.e. how the organisation views its employees and customers. The importance of these two cornerstones should not be forgotten in our increasingly digital world, even as we start to reshape our operating methods. After all, an organisation’s values are the foundation on which its customer experience is built.

The systemic view – what does it mean?

Due to the systemic nature of organisations, all the parts that make up an organisation are intrinsically interconnected. What this means in practice is that every change caused by the digital transformation inevitably affects not only the management of an organisation, but also its structures, processes, expertise and recruitment needs as well as its work and organisational culture – and its customer experience.
While carrying out its own change process, an organisation must also make sure that the quality of its customer experience remains excellent and keeps improving. However, in order to provide customer value in the present, constantly changing environment, more and more organisations must also be able to serve customers through digital channels. The ability to respond to this challenge can vary considerably between different organisations. Overcoming the situation may require some organisations to undergo rather painful procedures, while others may be able to navigate the changing tides with ease, hardly rocking the boat at all.

A good customer experience is not created in a vacuum…

Despite acknowledging the importance of customer-orientation, the development of many organisations seems to be characterised by an excessively inward-looking approach. That is to say, organisations will surprisingly often implement ongoing development measures as if they existed in a vacuum, which can lead an organisation to develop the operating models and processes considered vital to the customer experience based exclusively on information and needs arising from within the organisation itself.
This kind of approach is of course quite understandable, especially in sectors where competition over skilled employees is high, since in these sectors a positive employer image is a vital factor and condition for success. Even so, the customer should never be entirely forgotten, as focusing exclusively on internal development work can also lead to negative consequences in terms of the customer experience.

… Instead, a truly customer-oriented organisation is born through dialogue

Organisational design helps us develop an organisation in a comprehensive manner, so that all of its functions synchronously support not only the change taking place in different areas, but the customer experience provided by the organisation as well.
In addition, organisational design aims to prepare the organisation for the impacts that organisational changes may have. Such impacts can include the realignment of the organisation in relation to the market situation, operating environment, partners, competitors or customers, for example.
Comprehensive organisational design must always focus on also examining the world outside the organisation and the requirements that the outside world imposes on the organisation’s success. Furthermore, organisational design that strives for a good employee and customer experience requires focusing not only on operational (business) objectives, but on people as well. Ensuring a seamless and customer-oriented operation requires collecting information on the wishes, expectations, joys and woes that people experience when working in the organisation, as well as the wishes, expectations, joys and woes that people face when they interact with the organisation as customers and navigate its service environment.

 

Methods of organisational design 

The methods of organisational design employed should always be selected based on the present change needs and situation. Here are some examples of the design tools that we utilise at Gofore:

  • management consultation and sparring
  • business design
  • service design
  • data analytics
  • cultural consultation

Another tool that we consider important in regard to organisational design is research based on quantitative, qualitative and mixed methods, as well as the analysis of the results of said research. Research, along with co-creation and other participatory methods, is a vital part of organisational design work, providing us with information on human behaviour, which is crucial for development. Armed with this information, we can start working together with the organisation’s representatives on how best to approach past or ongoing changes within the organisation. At the same time, we can start preparing concrete plans on how and in what kind of timeframe we should promote development from the perspective of improving the customer experience.

Customer experience is always a subjective phenomenon, the quality of which can be influenced by means of organisational design. That is why it is worth asking whether it is possible to scale something so deeply rooted in individual experience. And if it is, how do we go about it? These questions will be answered in the next blog post, which will focus on the scaling of the customer experience.


Soile Roth

Soile works at Gofore as the head of Gofore's Design business and as an expert on organisation design and business design. Soile has extensive experience in account management and improving the customer experience, accumulated at both Finnish and global companies. As for education, Soile holds master's degrees in Education and Social Sciences, in addition to which she is a Certified Business Coach and a Certified Master Supervisor and Coach of Leaders and Executives. Soile is currently working on a doctoral thesis on management development.



Innovations in communications and artificial intelligence are fast changing the ways in which we learn and interact with one another. Paper is disappearing from our homes and offices, automation is gradually replacing repetitive and dangerous tasks, and people are becoming ever more connected and empowered through the use of the internet.

It wasn’t always this way

New technologies are often based on the cumulative effort and discoveries of many individuals and organisations, all trying to push the boundaries. These projects dare to be different. They innovate.
Innovation is what drives us forward as a species. This began with humans learning to use tools, and we have kept refining materials and resources ever since. In its elementary form, innovation is about solving a problem in a new way. It’s about forward thinking — staying ahead of the curve. Many of the most successful products and services in existence were fuelled by innovative thinking, and it’s made some people a lot of money along the way!
Innovation sounds great, right? Well, there is a darker side that’s less talked about — failure. Failure is an inherent risk associated with innovation, and it can put many off the idea of trying something different or new. Dealing with so many unknowns can be unsettling for anyone. Will it work? Will people use it? What happens if it fails? In large organisations, innovation culture tends to be dictated by senior management. There are some organisations with entrepreneurial cultures, encouraging new ideas to develop and ‘fail fast’. At the other end of the spectrum, however, there are organisations that encourage a risk-averse culture, with ‘red tape’ in place to avoid any unnecessary risks. Although the latter organisations reduce the risk of a failing project, they’re potentially opening themselves up to a much larger risk — getting left behind.

Without innovation, we’re stuck with ‘good enough’

As technology progresses, so do our capabilities. We need to take risks to move forward. Without risks, we merely stagnate, proclaiming that what already exists is ‘good enough’. We’re left patching up old problems, working around issues instead of having the courage and ambition to find a better solution. In time, products and services will move on, and risk-averse organisations will be left wondering why there’s no longer a market for what they’re offering. Innovative companies almost certainly have the edge in this respect. Fortunately, it is possible to mitigate some of the risk involved with innovation — simply by doing your research.
At the end of the day, your customers will be the ones who are going to use your product. Therefore, you should always involve your customers in the design and implementation of your products and services. To be confident that an idea is worth pursuing, it’s important to understand whether there is an actual need for it, whether users are able to use it, and perhaps most importantly, whether the product or service provides a good experience for the user. User experience design is a very useful way of understanding customers, allowing you to build novel solutions whilst also considering the needs and desires of users. With a strong knowledge of your users, new products and services are far more likely to succeed.
Stop stagnating. Start innovating. Listen to your users, and start delivering brilliant new solutions.


Liam Betsworth


Imagine it’s 4 in the morning and you’ve already been working for about 28 hours, with short naps here and there. Your teammate approaches you and asks, “Shouldn’t we all get some proper sleep?”. You consider it; after all, you feel dead tired. But when you look at the clock, a sudden injection of adrenaline wakes you up again: there are only 8 hours left to deliver the project!
That’s how I felt on the second night of this Hackathon. If you’ve never been to a Hackathon, it’s an event at which you only have two days to create a whole new product/service/business from scratch and you’re expected to have at least a working prototype at the end.
Said to be the largest Hackathon in Europe, Junction had the confidence to invade Asia and set up an event in the heart of the land of the rising sun, Tokyo, Japan! And I was there, at Junction Tokyo!

How did that happen?

That’s the kind of great and unique experience that can happen when you work at Leadin! When I saw the possibility of going to Tokyo and being part of this massive tech Hackathon, I imagined how cool it would also be for Leadin to have someone in Japan for a couple of days, participating in something big, making interesting contacts, and bringing back some fresh knowledge to the team. Guess what? The guys at the top also loved the idea and sponsored me! Yay!
This edition of Junction had three tracks: Sustainable Development, by iamtheCODE, DMM and SaharaSparks; Logistics and Storage, by Terrada; and Robotics, by SoftBank. IBM also had a special challenge, based on IBM’s cloud platform Bluemix, which could be combined with any of the tracks.
In a multidisciplinary team with two Japanese and three Thai members, I embarked on a combination of the Robotics track and the Bluemix challenge, and we worked with Pepper, SoftBank’s super friendly humanoid robot.

picture credit: Junction Tokyo

Changing the way people work

With the original challenge of “how can we change the way people work?” we created an office buddy. Pepper would be responsible for arranging people’s schedules, proposing different times for appointments, and being a friendly buddy to create a more relaxed and fun environment in the workplace.
At the end of the two days, the main working functionality we created with Pepper was the ability to book meetings with a voice command, say for example “Pepper, I’d like to have a meeting with Jake and Jane, on May 5 from 10 to 11”.
Under the hood, the voice would be recorded by Pepper and sent to IBM Watson’s Speech to Text API to be turned into text, processed in a series of scripts run in the Bluemix cloud with Node-RED, and turned into an HTTP request to the backend, which would finally send a message via Socket.io to the user interface to update the schedule in real time. Phew! I really couldn’t have imagined we would achieve all of this in such a short time.

Coffee, energy drinks and pizza

Some more technicalities in case you are interested (sorry, I’m an engineer, I can’t help it): SoftBank’s Choregraphe was used to create nice interactions and answers from Pepper (like quoting Star Trek while all the processing was happening), both the backend in Python and the frontend in ReactJS were hosted under different domains at Bluemix, and the frontend had an automated process to build and deploy as soon as a new commit was detected in GitHub. A kind of Frankenstein of technologies, but with so little time everyone ended up using what was most familiar to them, and we figured out a way to integrate everything in the end.

picture credit: Junction Tokyo

In short, plenty of coffee, energy drinks, pizza, back pain, sleeping on a beanbag, going for a walk under the Sun or in the middle of the night to get some fresh air and stretch, a surprise yoga session in the morning (I don’t remember which morning anymore), learning a couple of words in Japanese, meeting and working with great people, programming a robot, and a lot more! I just don’t have enough words to describe how amazing this opportunity was.
Thanks to Leadin I had a blast in Japan!
This post was written by Fabiano, who attended the Junction hackathon in Tokyo in April 2017. Picture credits: Junction Tokyo http://tokyo.hackjunction.com/


Fabiano Brito


“Exhale as you step your feet apart and turn your foot 90 degrees and reach up for the sky.” That is something you might hear at the Leadin UK office.

We all know exercising is good for us, but often it’s side-lined by excuses of being too busy. In fact, it’s statistically proven that people who exercise regularly have more energy. Doing team exercises is one of the initiatives from our team in the UK to enhance the office environment. Nowadays one might find us doing yoga poses, simple stretches or even a plank to the song Roxanne.

Exercising in an office environment is well-known in countries such as Japan. The concept derives from Samurai warriors, for whom two skills were of utmost importance: swordsmanship and calligraphy. These skills were to be mastered over a long time, and one would not work well without the other. If one stopped training, one would rapidly lose one’s skill.

Now we may not be perfecting our swordsmanship, but exercising is known to train the brain to be more flexible and boost creativity. Even though it may only take around 5 minutes of exercise at the office, it strengthens team building and boosts productivity. Our exercising is also contagious – one day last week a senior member of one of our international clients decided to join in – I bet he never thought he would be doing a plank before his meeting! It’s noticeable that the office has become even more of an open environment that promotes communication. Another benefit is that we learn new exercises as well, as we take turns to lead them.

It’s often the simple things such as this that can make the biggest impact in a workplace. Team building can come in many forms, but when you think of it, it really just takes some creativity and a team willing to participate.

We continually look for more things to improve the work culture. Occasionally we even hold themed luncheons where we bring in food to share after our exercises of course. 😉

You can check out some of our exercise photos on our social channels: Facebook and Twitter.


Karman Wong

Having worked in multiple countries, Karman, who is originally from Denmark, is now a principal designer (and exerciser) based in our Swansea office.


What is an Idempotent Consumer (aka Idempotent Receiver) and why should you consider it as a friend? First, it’s an Enterprise Integration Pattern (EIP) as the title suggests. Second, it takes care of handling duplicate messages that may travel between different systems. This blog post reveals a couple of real-life problems that can be solved using the Idempotent Consumer pattern and provides some technological insight on the implementation side.

Why do we need to handle duplicate messages?

If (or rather when) communication relies on unreliable protocols, e.g. HTTP over the Internet, message delivery can only be guaranteed by sending the message again until the sender receives an acknowledgement. Thus, sending duplicate messages is inherent in the communication protocol itself. Suppose a B2B case where application A sends an order request to application B but doesn’t get an acknowledgement. It doesn’t know whether B received the message or not, so it decides to send the same message again. If there’s no duplicate message handling implemented, application B gets two orders instead of one.
Another point of failure lies in the area of distributed transactions, when not all the parties involved are able to participate in a distributed two-phase commit. If a message is sent to two applications but only one of them processes it successfully, the state is inconsistent. If both receiving applications handle duplicate messages, the message can simply be sent again: the application that already processed it successfully just ignores the resent message, while the other one gets a second chance to get things right.

Step into the Camel route

Apache Camel is an open source Java framework that enables you to integrate distinct applications. Apart from providing a wide variety of transports and APIs it also gives you concrete implementations of many of the EIPs. The glue between transports and EIPs is the Domain Specific Language (DSL) that supports type-safe smart completion of routing rules in Java code. The list of supported transports is huge, and in addition to many old-school transports like FTP, JMS, Rest Services, Web Services, etc. it includes many of the Amazon Web Services (AWS) components e.g. Elastic Compute Cloud (EC2), DynamoDB (DDB), Simple Email Service (SES), Simple Queue Service (SQS), and Simple Storage Service (S3) to name a few.
One of the Camel-provided EIP implementations is the Idempotent Consumer. In order to detect duplicate messages it needs to retrieve the unique message id from the given message exchange. There are various ways to accomplish this, e.g. by using an XPath expression to fetch it from the message body. The unique message id is then looked up in an Idempotent Repository: if it’s found, the message has already been consumed; otherwise the message is processed and the id is added to the repository. One thing to note about the unique message id is that it should not be tied to any domain concept, in order to keep the business logic separate from the integration infrastructure.
There are a couple of options that let you control how the duplicate message handling works. First, you can enable eager processing, which means that Camel adds the id to the repository before the message has been processed, in order to detect duplicates even among messages that are currently in progress. With eager processing disabled, Camel only detects duplicates among messages that have been successfully processed. Second, you can choose whether to skip duplicates or not. When skipping is enabled, duplicates are not processed any further. Otherwise message processing continues and you are given the option to implement some custom logic for the duplicates. The following code snippet depicts the latter case, i.e. messages are routed to different routes (duplicateMessages vs. newMessages) based on whether they have already been processed or not.

Idempotent Consumer in a Camel route
 from(inputQueue).
    idempotentConsumer(header("messageId")).messageIdRepository(idempotentRepository).skipDuplicate(false).
    filter(property(Exchange.DUPLICATE_MESSAGE).isEqualTo(true)).
        to(duplicateMessages).
        stop().
    end().
    to(newMessages);
 

Consider that you’d like your application to work in a functional style, i.e. always return the same response for identical requests. This can be achieved by saving the response as part of the newMessages route and fetching it in the duplicateMessages route. In the order example context this would mean that no duplicate orders are created; instead the existing “order received” response is returned. Because duplicates are detected already in the integration layer, the order processing application doesn’t need to be accessed at all. A sketch of this idea follows.
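The following is only a sketch of the idea, not code from an actual project: responseCache stands for some shared store (e.g. a ConcurrentHashMap or an external cache), and direct:processOrder is a made-up name for the actual order processing endpoint.

Caching responses for duplicate messages
 from(newMessages).
    to("direct:processOrder").                     // hypothetical order processing endpoint
    process(exchange -> responseCache.put(        // remember the "order received" response
        exchange.getIn().getHeader("messageId", String.class),
        exchange.getIn().getBody(String.class)));
 from(duplicateMessages).
    process(exchange -> exchange.getIn().setBody( // replay the saved response for duplicates
        responseCache.get(exchange.getIn().getHeader("messageId", String.class))));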

Create your own Idempotent Repository

Regarding the Idempotent Repository, there are several alternatives to choose from. Camel provides the following implementations:

  • MemoryIdempotentRepository
  • FileIdempotentRepository
  • HazelcastIdempotentRepository
  • JdbcMessageIdRepository
  • JpaMessageIdRepository
  • InfinispanIdempotentRepository

Besides these, you are free to create your own Idempotent Repository implementation, e.g. a NoSQL version that uses MongoDB as a repository. All you need to do is implement the IdempotentRepository interface and use it in a Camel route. One thing to note is that even though the term Idempotent Repository is used in the Camel context, it doesn’t mean that the repository itself is idempotent. Rather, it’s a repository for the Idempotent Consumer. Now that we’re clear about the terminology, we can take a look at the skeleton version of the IdempotentRepository implementation, which is depicted in the following code snippet.

MongoDbIdempotentRepository skeleton implementation
public class MongoDbIdempotentRepository implements IdempotentRepository<String> {
    @Override
    public boolean add(String key) {
        // add key to the repository according to the java.util.Set contract
    }
    @Override
    public boolean confirm(String key) {
        // confirm the key to the repository, after the exchange has been processed successfully
    }
    @Override
    public boolean contains(String key) {
        // check if the repository contains the key according to the java.util.Set contract
    }
    @Override
    public boolean remove(String key) {
        // remove the key from the repository (invoked if the exchange failed)
    }
    @Override
    public void start() {
        // start the service e.g. open up a MongoDB connection (a CamelContext lifecycle event)
    }
    @Override
    public void stop() {
        // stop the service e.g. close the MongoDB connection (a CamelContext lifecycle event)
    }
}

After the exchange has been processed, the key is either confirmed to the repository or removed from it. This means that if the exchange failed, it is possible to resend the message without it being detected as a duplicate, i.e. only the successfully processed exchanges count. The start and stop methods originate from a Service interface, which is extended by the IdempotentRepository interface, and tie the implementation to the CamelContext lifecycle. In the MongoDbIdempotentRepository example they are used to open up and close the MongoDB connection respectively.

The idempotent repository only needs to keep track of the unique message id and the confirmed flag. The latter relates to the eager option discussed above, i.e. when eager processing is enabled, the message can be considered a duplicate even though the exchange is not yet confirmed to have been successfully processed. Otherwise only confirmed entries are taken into account when checking whether the repository contains the key or not. In MongoDB, an idempotent repository entry might look like the following.

A MongoDB idempotent repository entry
{
    "_id" : ObjectId("579c43812adc2ab4e6e51df4"),
    "messageId" : "83502935223",
    "confirmed" : true
}

Depending on the use case it may not be desirable to keep the entries in the repository forever. Luckily, MongoDB provides a couple of features that can be used to expire documents from a collection automatically. You may choose the time-to-live (TTL) collection feature, defining how long an entry is allowed to live in a collection; after the specified period of time, mongod automatically removes expired entries. Another choice is to create the idempotent repository as a capped collection, i.e. a fixed-size collection. Once the collection fills its allocated space, it starts overwriting the oldest documents in FIFO style to make room for new ones. Either way, you don’t have to worry about manually deleting old idempotent repository entries. Both options are sketched below.
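As an illustration, assuming the entries are extended with a createdAt date field (the example entry above does not have one), both options could be set up in the mongo shell along these lines:

Expiring idempotent repository entries
// TTL index: mongod removes entries about a week after their createdAt date
db.idempotentRepository.createIndex(
    { "createdAt": 1 },
    { expireAfterSeconds: 604800 }
)
// Alternative: a 10 MB capped collection that overwrites the oldest
// entries in FIFO style once it is full
db.createCollection("idempotentRepository", { capped: true, size: 10485760 })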

Marko Hollanti


It has been possible to host static web sites on Amazon S3 for quite some time. Combined with the CloudFront CDN, this provides a fast and efficient way to reach global audiences. In addition, using S3 and CloudFront is typically cheaper than running your own web server (with or without CloudFront in front).
The basic problem with this setup is the lack of dynamic content. If you want to integrate typical web features, such as login and saving data, into a website hosted on S3, you still need a separate server running your API. And running your own API server brings with it all the typical problems of running a server: you need to make sure that the operating system is patched up, the firewall is secure and so on.
Amazon Lambda allows you to run simple scripts or programs in response to events. These programs should be small, stateless and serve a single purpose. The events can be triggered by actions like a file being uploaded to S3 or records arriving on an Amazon Kinesis stream. One interesting option is to trigger Lambda functions in response to REST API events. This is discussed in more detail later.
Running code on Lambda is billed by the time and RAM used. Billing is based on the resources actually consumed, and there is no need to pay for reserved or provisioned capacity. This makes it especially cost-efficient to run seldom-used code in Lambda. On the other hand, Lambda can also be used for high-throughput processing, as AWS automatically provisions capacity for Lambda functions as required. The only hard limitation is that a single function execution may not last longer than 300 seconds. Amazon has pricing examples on their Lambda billing page.
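To make the pricing concrete: at the rates published at the time of writing ($0.20 per million requests and $0.00001667 per GB-second of compute), a 128 MB function running 200 ms per invocation and called a million times a month consumes 1,000,000 × 0.2 s × 0.125 GB = 25,000 GB-seconds, which works out to roughly $0.42 of compute plus $0.20 of requests, before the monthly free tier is applied.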
Amazon API Gateway allows you to publish and proxy REST APIs. These APIs can point to any HTTP endpoint, such as servers running on EC2 or public APIs on the internet. API Gateway also allows calling Lambda functions, which makes it possible to publish Lambda functions as a REST API. Amazon API Gateway is priced at $3.50 per 1 million requests and $0.09 per gigabyte of data transfer (as of 15 Dec 2015).
So by combining API Gateway and Lambda, we can implement a fully functioning REST API without any servers that we need to manage. In addition, these Lambda functions are fully capable computer programs and may, for example, persist data on RDS or DynamoDB. Therefore we may combine CloudFront/S3 with API Gateway and Lambda to implement a fully serverless website or application. The basic architecture is illustrated in the following picture.
[Figure: serverless architecture, with CloudFront/S3 serving static content and API Gateway plus Lambda providing the API]
 
The benefits of this setup include paying only for the resources you actually use, not for provisioned capacity. The code and content are fully hosted on managed services, and there is no need to maintain individual servers or handle issues like security updates. In addition, API Gateway enforces HTTPS connections and CORS headers, so your data should be secure in transit.
Drawbacks include the lack of access to the servers running the code and, in the case of custom domains, the need to obtain SSL certificates. In addition, there is no way to control how Lambda provisions your code; you just need to trust that there are enough resources available to run it. It should be noted that Lambda only allows you to adjust the amount of allocated RAM, but this also affects your CPU allocation: the more RAM you have, the more CPU you are given.
Managing Lambda applications and API Gateway routes is currently challenging. Pretty much everything needs to be explicitly mapped and allocated, either manually or through some kind of automation. There are tools such as Serverless to help with the setup, but the tooling can still be called rudimentary.
I wrote a simple Lambda function that implements a counter using DynamoDB for storage. This is a simple node.js script that makes an API call to DynamoDB and returns the answer to the caller. It should be noted that no username or password is needed for DynamoDB, as access rights are granted through IAM. A sketch of what such a function looks like is below; the actual script is in the gist linked at the end of this post.
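To give an idea of the shape of such a function, here is an illustrative sketch, not the exact script from the gist; the table and attribute names are made up:

A Lambda counter function backed by DynamoDB
var AWS = require('aws-sdk');
// No credentials in the code: the Lambda execution role grants DynamoDB access
var dynamodb = new AWS.DynamoDB.DocumentClient();
exports.handler = function (event, context) {
  dynamodb.update({
    TableName: 'Counters',              // hypothetical table
    Key: { id: 'page-hits' },
    UpdateExpression: 'ADD hits :one',  // atomic increment
    ExpressionAttributeValues: { ':one': 1 },
    ReturnValues: 'UPDATED_NEW'
  }, function (err, data) {
    if (err) {
      context.fail(err);
    } else {
      context.succeed({ hits: data.Attributes.hits });
    }
  });
};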
 
To sum up, I think that hosting a simple website on S3 and Lambda is fully possible. This is especially viable for small websites with limited interactive functionality which can be implemented with a small amount of Lambda functions. However, implementing larger applications on top of API Gateway and Lambda might be challenging as all routing needs to be handled on API Gateway and all functions need to be managed separately on Lambda.
https://gist.github.com/lhahne/167c40baa7febdfc8f2b


Lauri Hahne


If you are using some of the currently common frontend toolchains, chances are you are using Bower. In short, Bower is a package manager for JavaScript libraries and other frontend dependencies. It’s something that usually sneaks in with other build tools. It’s lightweight and does its job, so you’ll probably just set it up and forget it. Turns out you don’t need Bower and, in fact, it may actually be causing some of your problems. Let’s take a closer look.

What is wrong with Bower

As a Bower user, you probably are more than familiar with this sight:
[Screenshot: Bower’s “Unable to find a suitable version” prompt]
You may be accustomed to giving little hints to Bower so it can figure out which package versions are compliant. When you skim through the listed requirements, you cannot escape the feeling that something is inherently wrong when your package manager needs your help to do its core job.
The reason, as it turns out, is that Bower doesn’t support nested dependencies. If the packages you use contain subdependencies, Bower resolves them into a flat dependency list, which must satisfy all subdependency requirements. If Bower cannot find a version of the dependency that meets all the conditions, you get a conflict and must resolve it by hand. Bower can persist this choice into a resolutions list (see the example below), and it’s often not a big deal when you are only dealing with one or two of these resolutions. But the more dependencies you add, the more difficult it becomes to find versions that satisfy all of your dependencies.
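For illustration, a pinned resolution in bower.json looks like this (the package name and version are just an example):

{
  "resolutions": {
    "angular": "~1.3.15"
  }
}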
If your packages are large and few in number, flat dependency management may even be a plus. After all, it means that Bower is checking you don’t mix non-compliant modules. But that’s also the problem: in an attempt to avoid conflicts, flat dependency management encourages you to use fewer dependencies that do more things at once. A good example of this is jQuery and its plugins. jQuery contains many commonly required features like AJAX and data manipulation methods. Libraries that require these features often register themselves as jQuery plugins, which allows the plugins to share some common functionality. But this approach doesn’t scale very well: if you need to share code between several plugins, you will find yourself reinventing dependency management on top of jQuery.
Bower is also redundant. Given the recent popular trends in adopting node and npm in frontend build tooling, you probably already have a package management system. Practices in asset modularization have also evolved, leaving Bower with a lot of boilerplate you can get rid of. This includes not only your own code, but the repositories you reference: Bower blindly pulls out all of their files, including internal tooling and build scripts you don’t actually need or want. While ignore support was added later on, it is an opt-in configuration that not all modules obey.
The problems don’t end here. Bower is also unreliable. It is actually not a real package repository, but rather just a way to handle metadata. So when GitHub gets DDoSed, your build breaks. In Bower’s defense, things have been improving on this front. Bower does local caching and is also beginning to support private registries. But things are still far from perfect. Bower’s package cache is global by default, so CI configuration is easy to get wrong, which may lead to parallel builds breaking randomly. Also, in the absence of SaaS-provided private registries, hosting a private Bower yourself is still an endeavour not to be undertaken lightly.
Don’t get me wrong: back in 2012 Bower was a good tool, and probably the only good one for managing frontend assets. But things have moved forward. And so should we.

Step 1 – Move dependencies from bower.json to package.json

You are probably already using npm as part of your frontend build tooling. Most of your frontend packages are probably already available on npm too, so there’s little reason to pull them from Bower instead of npm.
Let’s consider the following bower.json as an example:

{
  "name": "my-bower-project",
  "version": "0.1.0",
  "dependencies": {
    "angularjs": "~1.3.15",
    "jquery": "~2.1.3",
    "lodash": "~3.6.0"
  }
}

The equivalent file in npm is package.json. If you don’t have it already, generate it with:

npm init

Now, proceed to move your current Bower dependencies from bower.json to your package.json.
For example, we can add jQuery as an npm dependency by invoking:

npm install --save jquery

npm installs jquery into your local node_modules folder and stores the relevant metadata in package.json.
Repeat this for all of the dependencies that are available on npm and you end up with something like:

{
  ...
  "dependencies": {
    "angular": "^1.3.15",
    "jquery": "^2.1.3",
    "lodash": "^3.6.0"
  }
}

You can now load the dependencies straight from the node_modules folder, just like you loaded your Bower components from bower_components. Just with this small change, you already gain benefits. Unlike Bower, npm provides you with a real package repository. Also, since you don’t need to load packages from third-party repositories, your builds are less likely to break.

Step 2 – Use require instead of wiredep

Dependencies in Bower are commonly wired together with wiredep. Wiredep is a tool that determines the right order of loading for your Bower packages and injects them into your source file inside placeholder comments.
npm doesn’t work like that. With npm, you just write your application as a collection of CommonJS modules. Any module can import its dependencies and export a public API for consumption by its clients.
Say you have an app.js that depends on several modules: jQuery, Lodash and Angular.js. With require you simply add these as dependencies:

var $ = require('jquery'),
    _ = require('lodash'),
    angular = require('angular');

Note that require doesn’t work in browsers out of the box, so instead of wiredep, you now need a different tool for resolving and bundling these modules together.
One of the popular options for bundling is Browserify.
First, start by installing Browserify globally as a command-line tool:

npm install -g browserify

We can now pass our app script to browserify:

browserify app.js -o bundle.js

And out comes a full, browser-friendly bundle which can be included with a single script tag:

<script src="bundle.js"></script>

Stay productive by using watchify to efficiently recompile your bundle on each change. As with Browserify, first install it globally:

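npm install -g watchify

Then run it against your entry point:
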
watchify app.js -o bundle.js

Step 3 – Addressing any further concerns

Often that’s all you need to do. Just remove Bower from your dependencies and start building on your new stack.
However, you may still have some lingering concerns that we’ll attempt to address next.

But but but… I use private repositories!

No worries! You can generally still use them with npm:

{
  ...
  "dependencies": {
     "lib1": "git+https://jsalonen@github.com/jsalonen/lib1.git",
     "lib2": "git+https://[TOKEN]:x-oauth-basic@bitbucket.org/jsalonen/lib2.git"
  }
}

When it comes to popular repository hosting services, GitHub is the easiest to use, as it supports basic authentication. Bitbucket doesn’t support that, but you can still pull repositories off it by supplementing your dependency with an OAuth access token. Similar methods apply to other Git repository hosting services.
Note that npm also supports private registries. As you get accustomed to npm, you may find yourself splitting your code into a number of smaller, private modules. With a private npm registry like npm Enterprise or Sonatype Nexus, you can publish and depend upon these modules with minimal hassle. Additionally, you can use private registries to mirror public npm modules to speed up module downloads and make your infrastructure more resilient to problems in the public part of the registry. A minimal configuration example follows.
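For example, pointing npm at such a registry is a one-line entry in your project’s .npmrc file (the URL below is a placeholder):

registry=https://npm.example.com/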

But but but… I can’t get rid of all Bower dependencies!

If you want to start using browserify but don’t want to port all your packages over to npm, you can still pull some of the packages from Bower with debowerify.
Let’s say you want to add a dependency to typeahead.js. Here’s how to install it with Bower:

bower install --save typeahead.js

The npm way would be to load it using require:

var typeahead = require('typeahead.js');

To enable this way of loading, install and add debowerify as a dependency:

npm install --save-dev debowerify

Then just use it as a browserify transformation:

browserify -t debowerify app.js -o bundle.js

Note that you still need to specify any such dependencies in bower.json and install them with Bower.
Debowerify only works if the “main” entry is defined in the component’s bower.json.

But but but… I need modules outside Bower and npm!

Vendor scripts relying on browser DOM can be adapted to Browserify with browserify-shim (see the sketch below).
If you need to use scripts relying on Asynchronous module definition (AMD), check out deAMDify.
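As a sketch of the former, browserify-shim is configured in package.json by mapping a file to the global variable its script exposes; the module and global names below are made up:

{
  ...
  "browserify": {
    "transform": ["browserify-shim"]
  },
  "browser": {
    "vendor-lib": "./vendor/lib.js"
  },
  "browserify-shim": {
    "vendor-lib": "VendorLib"
  }
}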

But but but… my frontend and backend dependencies get mixed!

If you want to keep your server-side and browser-side applications decoupled, you can just create two separate npm modules and manage dependencies separately.
Sometimes decoupling server and client is not desirable. This could be the case if you are writing an isomorphic application (e.g. a single-page app that uses server-side prerendering).
If this is the case, specify dependencies in package.json as usual. Proceed to use the browser field in package.json to override file resolution for browser-specific versions of files.
Say that you want to use lodash on the client and server, but want to use the modern build on the server and the compat build in the browser.
Begin by adding both lodash versions as npm dependencies:

npm install --save lodash
npm install --save lodash-compat

Proceed to add a browser field entry for lodash to hint that you want the compat build whenever lodash is required in the browser:

{
  ...
  "browser": {
    "lodash": "lodash-compat"
  }
}

Different versions of lodash are now used on the server and in the browser, allowing you to adapt your code to both environments!

Final words

I have to be honest with you: Bower isn’t completely evil, and it actually works for a variety of tasks. Bower is also being actively developed, and some of the issues I pointed out in the beginning will probably be addressed in the future.
The bigger thing is that the JavaScript ecosystem has changed. Tools like npm and browserify allow us to modularize our applications without coupling our modules to the DOM or relying on global state. And with the upcoming ES6 module standard, there is even more to come. So why don’t you jump on the module bandwagon today and let even more great things come your way!

Further Reading

npm and front-end packaging (The npm Blog)
Browserify VS Webpack – JS Drama (Namal Goel)
The jQuery Module Fallacy (Ben Drucker)
Why MongoDB Didn’t Conquer the World: Part 1  (Juhana Huotarinen)
 
We are always on the lookout for skilled developers to join our award-winning team. We have exciting projects running in all our locations – get in touch to find out more!


Jaakko Salonen
