Joosa is an experienced fullstack software developer with a very broad skill set. He is always eager to learn and try out new things, whether in backend, frontend, devops or architecture. He has an agile mindset and always strives for clean and testable code. Joosa graduated from the University of Helsinki and wrote his master's thesis on AI and optimization algorithms.
Recently, for about a year and a half, I was working as a developer on a bleeding-edge, business-changing project building disruptive solutions. I cannot say much about the business or the customer itself, but I thought I would share some of my experiences of what we did and how.
Our team consisted of a Scrum Master, a UI/UX designer and full-stack developers, but the whole project had multiple teams working across the globe towards common goals using a Scaled Agile Framework (SAFe). Our team’s primary focus was to implement the web UI and the higher layers of the backend stack. We also contributed to the overall design and helped with coordination between all the product owners and different teams.
One of the best things about the project was getting to learn and use a huge number of different bleeding-edge open-source technologies.
While microservices on the backend are becoming very common, this project also used micro-frontends. This approach is rarer, but the benefits are quite similar: different teams can work on different parts of the frontend independently, since the parts are loosely coupled. New micro-frontends can also be written in different languages and with different technologies, so switching to a new technology does not require rewriting all the existing functionality. As our technology of choice for combining the micro-frontends, we started with single-spa but later switched to an iframe-based approach. Using iframes made development and testing easier and improved our capabilities for continuous deployment.
This second solution turned out to work quite nicely. The only big challenge was related to full-screen components, such as modal dialogs: an iframe can only render content within its own bounds. So, when a micro-frontend needed to open a modal dialog, it had to send a cross-window message to the top-level window, which could then render the dialog on top of everything else.
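The cross-window messaging pattern can be sketched roughly as below. The message type, payload shape and handler names are hypothetical illustrations, not the project's actual protocol; in the browser, the iframe would send with `window.parent.postMessage` and the host would listen via a `message` event listener.

```javascript
// Inside a micro-frontend iframe: build a request asking the host window
// to open a modal dialog. (Message shape is a made-up example.)
function buildModalRequest(dialogId, props) {
  return { type: 'OPEN_MODAL', dialogId, props };
}

// In the top-level window: route incoming messages by their type field.
// Returns true if a handler was found, false for unknown message types.
function handleHostMessage(message, handlers) {
  const handler = handlers[message.type];
  if (!handler) return false;   // unknown or foreign message: ignore it
  handler(message);
  return true;
}

// Hypothetical browser wiring (not runnable outside a browser):
//   window.parent.postMessage(buildModalRequest('confirm-delete', {}), origin);
//   window.addEventListener('message', (e) => handleHostMessage(e.data, handlers));
```

In a real setup the host should also check `event.origin` before trusting a message, since any window can post to another.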
For frontend unit tests we used Jest, Enzyme and Storybook snapshots, while end-to-end testing was done with TestCafe. Once again, we found that end-to-end tests are tricky to write – and quite a burden to maintain. It is therefore important to choose their scope carefully to get the best cost-value ratio, no matter which tool is used. Nevertheless, we were quite happy with TestCafe compared to the available alternatives.
The backend of the system as a whole was very complex. The dozens of microservices in the lower layers were mostly written in reactive Java and utilized, for example, an event sourcing architecture. On top of those, our team built around 10 microservices with Node.js. The communication between services was mostly based on RESTful APIs, which our services implemented with either Express or Koa. In many cases, Apache Kafka was also used to pass Avro-serialized messages between services in a more asynchronous and robust manner. To provide real-time updates to the UI, we naturally also used WebSocket connections. We learned that in some cases such Kafka-based messaging can work very well, but there is definitely a pitfall of over-engineering and over-complexity to be avoided.
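One practical consequence of Kafka-style messaging is that delivery is typically at-least-once, so consumers should be idempotent. The sketch below illustrates that idea only; the class name and message shape are invented, and the project's real services used actual Kafka clients and Avro serialization.

```javascript
// Hedged sketch: deduplicate deliveries by message id so that reprocessing
// a redelivered message has no effect. Not the project's actual code.
class IdempotentConsumer {
  constructor(handle) {
    this.handle = handle;    // business logic to run once per unique message
    this.seen = new Set();   // in production, processed ids would be persisted
  }

  consume(message) {
    if (this.seen.has(message.id)) {
      return false;          // duplicate delivery: skip to stay idempotent
    }
    this.seen.add(message.id);
    this.handle(message.payload);
    return true;
  }
}
```

With an in-memory set this only protects a single process; a durable variant would track processed ids (or Kafka offsets) in the database, inside the same transaction as the business change.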
In the persistence layer we started with CouchDB as a document database, but later on preferred the PostgreSQL relational database in most cases. With the latter, we used Knex for database queries and versioned migrations, and Objection for object-relational mapping. For our use cases, we did not really need any of the benefits of a document database, especially since PostgreSQL nowadays also supports JSON columns, which add flexibility to the standard relational data model when needed. On the other hand, the benefits of a relational database, such as better support for transactions and data migrations, were important for us.
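As a minimal illustration of mixing the relational model with JSON flexibility, PostgreSQL's `jsonb` column type can hold free-form attributes next to ordinary columns. The table and column names below are hypothetical, not from the project:

```sql
-- Hypothetical example: a mostly relational table with one flexible jsonb column.
CREATE TABLE devices (
  id       serial PRIMARY KEY,
  name     text NOT NULL,
  metadata jsonb NOT NULL DEFAULT '{}'  -- free-form attributes, no schema change needed
);

-- Query into the JSON document with PostgreSQL's JSON operators:
SELECT name FROM devices WHERE metadata ->> 'vendor' = 'acme';
```

This gives document-style flexibility where it is needed, while keeping transactions, constraints and migrations for the rest of the schema.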
Some essential parts of the backend infrastructure were Kong as the API gateway and Keycloak as the authorization server. Implementing complex authorization flows with OAuth 2.0, OpenID Connect and User-Managed Access (UMA 2.0) was one of our major tasks in the project. Another important architectural piece, which took most of our time in the latter stages of the project, was implementing support for the Open Service Broker API specification.
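To give a flavor of what a resource service does with an OAuth 2.0 access token, the sketch below decodes a JWT payload and checks a scope claim. This is an illustration only: a real service must verify the token's signature (for example against Keycloak's published keys) before trusting any claims, which this sketch deliberately does not do.

```javascript
// Decode the (base64url-encoded) payload part of a JWT. NOT a security
// check by itself: signature verification is omitted in this sketch.
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('not a JWT');
  return JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
}

// OAuth 2.0 conventionally carries scopes as a space-separated string.
function hasScope(payload, scope) {
  return (payload.scope || '').split(' ').includes(scope);
}
```

In the project's kind of setup, checks like this would sit behind middleware in Express or Koa, after the gateway and signature validation have already done their part.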
In the backend, we used the Mocha framework for unit testing but usually preferred to write the assertions with Chai. Mocking other components and API responses was covered with Sinon and Nock. Overall, our backend stack was a success and, at least for me, a pleasure to work with.
All the services in the project were containerized with Docker, and for local development we used Docker Compose. In production, the containers ran on OpenStack, orchestrated with Mesos and Marathon. Later on, we also started moving towards Kubernetes. For continuous integration and delivery, we used GitLab CI/CD pipelines. I also liked our mandatory code reviews for every merge request: in addition to assuring code quality, they were a very nice way to share knowledge and learn from others.
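A pipeline of the kind described above might look roughly like the fragment below. The stage and job names are illustrative, not the project's actual configuration; `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHA` are standard GitLab predefined variables.

```yaml
# Hypothetical .gitlab-ci.yml fragment for a Node.js service.
stages:
  - test
  - build

unit-tests:
  stage: test
  image: node:lts
  script:
    - npm ci
    - npm test

build-image:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```

Tagging images with the commit SHA makes every pipeline run traceable back to the exact code it built, which pairs naturally with per-merge-request reviews.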
In a large-scale project such as this, carefully implemented monitoring and alerting systems are, of course, essential. Different metrics were gathered from all the services into Prometheus and exposed through Grafana, while all the logs were made available in Kibana. We also used Jaeger as an implementation of the OpenTracing API, which allowed us to easily trace how requests flowed between services and pinpoint the origin of any errors.
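Services scraped by Prometheus expose their metrics in a simple text format, one sample per line. The tiny renderer below sketches that format; the metric and label names are made up, and real services would use a client library rather than formatting lines by hand.

```javascript
// Render one sample in the Prometheus text exposition format, e.g.
//   http_requests_total{method="GET",code="200"} 5
function renderMetric(name, labels, value) {
  const labelStr = Object.entries(labels)
    .map(([k, v]) => `${k}="${v}"`)
    .join(',');
  return labelStr ? `${name}{${labelStr}} ${value}` : `${name} ${value}`;
}
```

Prometheus then scrapes an endpoint (conventionally `/metrics`) serving many such lines, and Grafana queries the stored series for dashboards.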
The main challenge was that running such a huge project entirely on a local workstation during development is impossible. We investigated a hybrid solution, where some of the services would run locally and some in a development cloud, but found no easy answer there. As the project and the number of microservices continued to grow, we were getting close to the point where a better solution would have been needed. For the time being, we worked around the problem by mocking some of the heavier low-level services and making sure our workstations had plenty of memory.
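One common way to express that workaround with Docker Compose is an override file that swaps heavy services for lightweight stubs. Everything below is a hypothetical sketch, with invented service names, not the project's actual configuration:

```yaml
# Hypothetical docker-compose.override.yml for local development:
# a heavy low-level service is replaced by a generic HTTP stub.
services:
  inventory-service:
    image: mockserver/mockserver   # stub serving canned responses
    ports:
      - "8081:1080"                # 1080 is MockServer's default port
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: dev-only  # never for production use
```

Compose merges the override file with the base `docker-compose.yml`, so the rest of the stack keeps running unchanged while the heaviest dependencies are faked.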
In summary, this was a fun and challenging project to work on. I’m sure everyone learned tons of new skills and gained a lot of confidence through this project. I want to send my biggest thanks to everyone involved!