Integrating with Fitbit APIs

Wearable fitness sensors have gained popularity in recent years. Cheap devices measure general activity levels through motion detection, while more expensive devices measure more precise physiological data such as heart rate. In addition, these devices save the measurement results either to a phone or to the cloud. Fitbit is one such manufacturer; they make portable fitness trackers and connected scales.
[Image: a Fitbit tracking device]
Fitbit provides API access to their measurement data, which makes it possible to fetch the measurements and integrate them with other data. For example, you might have a Fitbit scale and activity tracker, and the API would allow you to fetch your weight and activity levels. A typical use case is to integrate Fitbit data with data from some other source. This other source could be another service, or the user could manually provide some extra information that they want to quantify and compare with the Fitbit data.
One interesting measurement that Fitbit doesn’t provide is waist circumference. If you intend to lose or gain weight, this is almost as important a measurement as your weight. To demonstrate how Fitbit’s API works, I built a simple application that fetches the user’s weight from Fitbit and allows the user to input their waist circumference. The source code can be found at https://github.com/lhahne/bulker. If you have a Fitbit scale and a Fitbit user account, you can try the app at https://bulkest.herokuapp.com/.
[Screenshot: the bulker app]
The dataflow of the app is quite simple. Fitbit automatically collects measurements from the user’s devices. My Node app then reads this data from Fitbit’s API and displays it to the user for validation. Once the user enters their waist circumference, both the weight (from Fitbit) and the waist measurement are saved to MongoDB.
Authenticating with Fitbit
Fitbit’s OAuth2 implementation seems to be somewhat different from what passport-oauth2 expects. Fortunately another package, passport-oauth2-fitbit, provides support for Fitbit’s authentication. It only requires you to provide your API keys and to configure a callback for the authentication. Here is a short example.
https://gist.github.com/8cda593b43462914b8ae
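In outline, the setup looks roughly like the following. This is only a sketch: the strategy constructor, the option names and the 'fitbit' strategy name are assumptions based on typical Passport OAuth2 strategies, so check the package README for the exact API.

```javascript
// Minimal sketch of Fitbit authentication with Express and Passport.
// Strategy and option names are assumptions; session setup is omitted.
var express = require('express');
var passport = require('passport');
var FitbitStrategy = require('passport-oauth2-fitbit').Strategy;

passport.use(new FitbitStrategy({
    clientID: process.env.FITBIT_CLIENT_ID,         // your Fitbit API key
    clientSecret: process.env.FITBIT_CLIENT_SECRET, // your Fitbit API secret
    callbackURL: 'https://bulkest.herokuapp.com/auth/fitbit/callback'
  },
  function (accessToken, refreshToken, profile, done) {
    // Keep the access token on the user object so the backend can
    // later hand it over to the frontend.
    done(null, { id: profile.id, accessToken: accessToken });
  }
));

var app = express();
app.use(passport.initialize());

app.get('/auth/fitbit', passport.authenticate('fitbit'));
app.get('/auth/fitbit/callback',
  passport.authenticate('fitbit', { failureRedirect: '/' }),
  function (req, res) {
    res.redirect('/');
  });
```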
Getting data from Fitbit
I decided to perform my API calls to Fitbit from the client instead of from my Node server. The reason is that performing the calls from the server would require duplicating Fitbit’s API on my own server. To achieve this, I need to transmit the current user’s API token from my Node backend to the frontend. This is quite simple with Express and Passport.
https://gist.github.com/8dd60efe237bcc10cad8
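The idea can be sketched as a small Express route; the route path and field names below are my own, not necessarily those used in bulker.

```javascript
// Hand the logged-in user's Fitbit access token to the frontend.
// Route path and field names are illustrative.
app.get('/api/token', function (req, res) {
  if (!req.isAuthenticated()) {
    return res.status(401).end();
  }
  // Passport exposes the user object from the verify callback as req.user.
  res.json({ token: req.user.accessToken });
});
```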
On the frontend, I then fetch this token and define a function that can be used to make API calls.
https://gist.github.com/3a3c8adaa492b560afe5
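Roughly like this (a sketch with illustrative names; the token endpoint matches the one above):

```javascript
// Fetch the token once, then use it for direct calls to Fitbit's API.
var token;

function fetchToken() {
  return fetch('/api/token', { credentials: 'same-origin' })
    .then(function (res) { return res.json(); })
    .then(function (body) { token = body.token; });
}

function callFitbit(path) {
  // Fitbit's API accepts the OAuth2 token as a bearer token.
  return fetch('https://api.fitbit.com' + path, {
    headers: { 'Authorization': 'Bearer ' + token }
  }).then(function (res) { return res.json(); });
}
```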
And finally we can fetch weight data from the API.
https://gist.github.com/26a085a0136e1f547d7f
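Using the helper above, fetching the last month of weight logs looks something like this; the '-' in the path refers to the currently logged-in user.

```javascript
// Get the weight log entries for the last month from Fitbit.
callFitbit('/1/user/-/body/log/weight/date/today/1m.json')
  .then(function (data) {
    // data.weight is an array of entries with date and weight fields.
    console.log(data.weight);
  });
```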
I implemented my frontend as two React apps. The first one reads the measurement input and integrates it with Fitbit’s data. The second one fetches the stored data from the backend and displays it to the user. This provides a nice separation of concerns and modularity.
The application currently has only a very crude user interface; in addition, some graphs would make displaying the measurements more user friendly.
The application currently supports only Fitbit, but integrating with other services would be possible. The app uses Fitbit’s user accounts for authentication, so handling other authentication methods and keeping track of different API tokens will require some extra work.


Lauri Hahne


The year 2015 is about to end and it’s time once again to sum up a great year in software development. This blog post highlights two big technologies of 2015: Spring Boot and Docker.
Containerization has become one of the most talked-about developments in web application infrastructure driving cloud native software architectures. Cloud native software usually relies on the microservice architectural style, where a software system is developed as a suite of small self-contained services, each running in its own process with an isolated database. Services communicate with lightweight mechanisms, usually HTTP resource APIs. Adopting the microservice architectural style requires heavy automation of infrastructure and easily deployable applications. The new architectural style also puts the development process under a heavy stress test.
Containers have existed in Linux for a long time, but not until Docker did they become a common building block of web application infrastructure. Docker is a toolset around Linux containers that has allowed the mainstream software industry to adopt container technology. Unlike virtual machines, containers are a process isolation technique far more lightweight than traditional virtualization. Containers require a smaller memory footprint and allow bootstrapping a system in just a few seconds. This makes them a perfect solution for running cloud native applications built with a microservices architecture.
Spring Boot is one of the most popular technologies for bundling and bootstrapping Java or Groovy based web applications. It provides an embedded application server, convention-over-configuration based autoconfiguration for the Spring Framework, and a lot of production-ready features such as metrics, health checks and externalized configuration. Spring Boot serves as a main building block for cloud native web applications.

Packaging the Spring Boot demo application

The demo application used in this exercise is a fairly simple URL shortening service, available on the Gofore GitHub account. The application itself is not that interesting; the way it is packaged and run on Docker is. Spring Boot 1.3 introduced two groundbreaking new features: fully executable packaging and hot restarting.
Fully executable packaging works by embedding a small script at the front of the jar or war file. Spring Boot repackages the application package to do its magic, which can be achieved with the Maven or Gradle plugins. Repackaging with the embedded script might break some tools, so use the new feature with caution. The script adds automatic support for init.d service commands (start|stop|restart|status), which allows the packaged application to simply be symlinked under init.d to make it an operating system level service. The embedded script also works with systemd.
Configuring the Maven plugin for fully executable packaging requires the following configuration:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-spring-boot-maven-plugin-xml
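In essence, the configuration adds a single executable flag to the Spring Boot Maven plugin (sketched below with the version tag omitted):

```xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <!-- Embed the launch script to make the jar fully executable -->
    <executable>true</executable>
  </configuration>
</plugin>
```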
And after packaging the application, creating the init.d service goes like a breeze:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-symlink-app-sh
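Something along these lines, with the paths being illustrative:

```sh
# Symlink the executable jar under init.d and manage it as a service
sudo ln -s /var/apps/gofurl/gofurl.jar /etc/init.d/gofurl
sudo service gofurl start
```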

Live deploy with hot restart

Hot restarting of the application is a somewhat different approach from the hot swapping of classes that, for example, JRebel does. Spring Boot’s hot restarting uses the same idea that Play Framework has been using successfully: the application does some tricks to internally restart itself as fast as possible. With a small project like gofurl, the cold start time of about 6 seconds drops to under 1 second with hot restart. And everything happens automatically: the changed files just need to be recompiled, and Spring Boot detects the changes on the classpath. The hot restart approach provides a really fast development cycle from code change to live testing, and it works more reliably than hot swapping tools like JRebel or Spring Loaded.
Spring Boot’s hot restart feature can be enabled by just adding the developer tools dependency to your project manifest. With Maven, use the following:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-spring-boot-devtools-xml
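The dependency declaration is short:

```xml
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-devtools</artifactId>
  <!-- optional keeps devtools out of downstream projects -->
  <optional>true</optional>
</dependency>
```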
The optional property prevents the developer tools from floating as a transitive dependency into projects that depend on the main project. Hot restart is enabled by default and must be explicitly disabled by setting the spring.devtools.restart.enabled property to false. The developer tools are also automatically disabled if the application is started from a fully packaged bundle.
Hot restart works straight from your IDE once the changed files are compiled. In IntelliJ IDEA, files must be compiled manually, while in Eclipse automatic compilation triggers the hot restart. Hot restarting is also supported when running a Spring Boot application with the Maven or Gradle plugins.

Provisioning immutable Docker containers

Production environments should be immutable by default, and Docker provides a foundation for immutability with its layered read-only file systems. Multiple file systems are layered on top of each other, providing the operating system, libraries, application runtime and finally the application itself. The normal development process with Docker requires a few steps: 1) change the application code, 2) build the Docker image, 3) run the Docker image.
The demo application uses Spotify’s Docker Maven plugin to create a docker image from the project:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-maven-plugin-xml
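A sketch of the plugin configuration; the image name and paths are illustrative rather than copied from the gofurl project:

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <imageName>gofore/gofurl</imageName>
    <!-- Directory containing the Dockerfile -->
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <!-- Bundle the packaged jar into the image build context -->
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>${project.build.finalName}.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
```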
The plugin configuration defines the name of the Docker image to be created, where the Dockerfile is loaded from, and which resources should be bundled into the image. The project can be built into a Docker image with the following Maven command:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-build-sh
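The build boils down to a single command:

```sh
mvn clean package docker:build
```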
The application requires a MongoDB database, and since Docker is a process isolation technique, we should run one process per container. This is where Docker Compose comes to help. Docker Compose is a tool that provides a way to orchestrate multiple Docker containers and their dependencies. If we define that the gofurl container depends on a MongoDB container, Docker Compose can automatically start all the dependencies. For this project three containers are used: one for the application itself, one for the MongoDB binary and one to preserve the MongoDB data files. This allows updating the MongoDB container without losing the stored data.
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-compose-prod-yml
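In Compose’s original v1 syntax, the production setup could look roughly like this (service names and ports are illustrative):

```yaml
# Application container linked to MongoDB, with a separate
# data-only container preserving the database files.
gofurl:
  image: gofore/gofurl
  ports:
    - "8080:8080"
  links:
    - mongodb
mongodb:
  image: mongo
  volumes_from:
    - mongodata
mongodata:
  image: mongo
  # Data-only container: exits immediately, exists just for the volume
  entrypoint: /bin/true
  volumes:
    - /data/db
```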
After building the project with Maven, Docker Compose can start the application with:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-up-prod-sh
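Assuming the production compose file is named docker-compose-prod.yml, that is:

```sh
docker-compose -f docker-compose-prod.yml up
```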
When automated, this process takes maybe tens of seconds, or at most a minute or so. But for development it is still too slow: a code change should be reflected in the running application immediately, or at least within a few seconds.

Enter the voodoo

Can we combine hot restarting with running the application in a container? Oh yes we can! But does it make sense? At least for large microservice systems with tens of services, Docker Compose does a good job orchestrating and bootstrapping all the services.
Docker runs one process per container, and that process can be the Maven command that runs the Spring Boot application. Spring Boot, in turn, can be configured to hot restart on classpath changes. The only problem is that by default Docker containers should be immutable. We can change this by mounting a volume from the host inside the container; the volume contains the application classpath. Now the only requirement for the Docker image is that it contains a JDK and the Maven binary to run the Spring Boot Maven plugin.
There’s also another problem with Maven, known as “downloading the whole internet”: when you don’t have a local dependency cache, a lot of dependencies will be downloaded from the internet. This can be avoided by mapping the local .m2 repository inside the container so that it can use your cache instead of downloading all the dependencies every time the application is started.
Now let’s change the container definition for gofurl to a development version that uses the described technique:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-compose-dev-yml
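A sketch of the development variant: the project directory (the application classpath) and the local Maven repository are mounted from the host.

```yaml
# Development setup: build the dev image and mount the host's project
# directory and .m2 cache into the container (adjust host paths as needed).
gofurl:
  build: .
  ports:
    - "8080:8080"
  links:
    - mongodb
  volumes:
    - .:/app
    - ~/.m2:/root/.m2
mongodb:
  image: mongo
```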
If you try this with the default settings of the Spring Boot Maven plugin, you will notice that nothing happens even if files are changed on your host’s classpath. This is because the application runs inside Maven’s classloader, which prevents Spring Boot from doing its hot restart magic. The solution is to fork the application into its own JVM process. Now we can define a Dockerfile that starts the application with the Maven plugin inside the container.
https://gist.github.com/trautonen/eff9b4066077053282dc#file-dockerfile
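A sketch of such a Dockerfile, assuming the volume from the compose file is mounted at /app; the Maven version is illustrative, and the run.jvmArguments property follows the Spring Boot 1.3 Maven plugin:

```dockerfile
# Official Java 8 image plus a Maven install; the application itself
# arrives through the mounted volume.
FROM java:8
ENV MAVEN_VERSION 3.3.9
RUN curl -fsSL http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
      | tar -xzf - -C /usr/share \
    && ln -s /usr/share/apache-maven-$MAVEN_VERSION/bin/mvn /usr/bin/mvn
WORKDIR /app
# Defining JVM arguments makes the plugin fork the application into
# its own JVM, which lets hot restart work.
CMD ["mvn", "spring-boot:run", "-Drun.jvmArguments=-Dspring.data.mongodb.uri=mongodb://mongodb/gofurl"]
```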
The Docker image is based on the official Java 8 image and only downloads Maven and adds it to the PATH. The image does not know anything about our application, because we rely on the volume mapping to provide the application classpath. The command starts Maven with the spring-boot:run goal, which fires up the application. Additional JVM arguments are provided to bind the MongoDB URI to the linked MongoDB container. When JVM arguments are defined, the plugin forks automatically, but the fork option is required if no arguments are needed.
Note that Docker runs as root, and you should have the application classpath compiled before starting the container; otherwise your local user won’t be able to overwrite the files in the classpath. When everything is in place, the development version of the application can be started with Docker Compose:
https://gist.github.com/trautonen/eff9b4066077053282dc#file-docker-up-dev-sh
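Again assuming the file is named docker-compose-dev.yml:

```sh
docker-compose -f docker-compose-dev.yml up
```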
The application should now be running, and if you make changes in your IDE and compile the changed sources, magic happens! The application inside the running container is updated and automatically restarted. Was all this worth the trouble? You could just run your Spring Boot application from the IDE and expose the MongoDB container’s port to the application. And you still need a JDK and all the tools to compile the sources on the host machine to provide a working classpath. I haven’t tested it, but the same approach should also work with Play Framework. Play has one advantage over Spring Boot: it also bundles a compiler, so if you are brave enough to change your Scala code with just a text editor, you don’t even need a JDK or IDE on your host machine to make live changes to an application running inside a container.


Tapio Rautonen



It has been possible to host static websites on Amazon S3 for quite some time. Combined with the CloudFront CDN, this provides a fast and efficient way to reach a global audience. In addition, using S3 and CloudFront is typically cheaper than running your own web server (with or without CloudFront in front).
The basic problem with this setup is the lack of dynamic content. If you want to add typical web features, such as login and saving data, to a website hosted on S3, you still need a separate server running your API. And running your own server to host your API comes with all the typical problems of running a server: you need to make sure that the operating system is patched, the firewall is secure and so on.
Amazon Lambda allows you to run simple scripts or programs in response to events. These programs should be small, stateless and serve a single purpose. The events can be triggered by actions like a file being uploaded to S3 or a record arriving on an Amazon Kinesis stream. One interesting option is to trigger Lambda functions in response to REST API events. This is discussed in more detail later.
Running code on Lambda is billed by the time and RAM used. Billing is based on the resources actually consumed, and there is no need to pay for reserved or provisioned capacity. This makes it especially cost-efficient to run seldom-used code on Lambda. On the other hand, Lambda can also be used for high-throughput processing, as AWS automatically provisions capacity for Lambda functions as required. The only hard limitation is that a single function execution may not last longer than 300 seconds. Amazon has pricing examples on their Lambda billing page.
Amazon API Gateway allows you to publish and proxy REST APIs. These APIs can point to any HTTP endpoint, such as servers running on EC2 or public APIs on the internet. API Gateway can also call Lambda functions, which makes it possible to publish Lambda functions as a REST API. Amazon API Gateway is priced at $3.50 per million requests and $0.09 per gigabyte of data transferred (as of 15 Dec 2015).
So by combining API Gateway and Lambda, we can implement a fully functioning REST API without any servers to manage. In addition, these Lambda functions are fully capable computer programs and may, for example, persist data in RDS or DynamoDB. We can therefore combine CloudFront/S3 with API Gateway and Lambda to implement a fully serverless website or application. The basic architecture is illustrated in the following picture.
[Diagram: serverless architecture combining CloudFront/S3 with API Gateway and Lambda]
The benefits of this setup include paying only for used resources, not provisioned capacity. The code and content are fully hosted on managed services, and there is no need to maintain individual servers or handle issues like security updates. In addition, API Gateway enforces HTTPS connections and CORS headers, so your data should be secure in transit.
Drawbacks include the lack of access to the servers running the code and, in the case of custom domains, the need to obtain SSL certificates. In addition, there is no way to control how Lambda provisions your code; you just need to trust that there are enough resources available. It should be noted that Lambda only allows you to adjust the amount of allocated RAM, but this also affects your CPU allocation: the more RAM you have, the more CPU you are given.
Managing Lambda applications and API Gateway routes is currently challenging. Pretty much everything needs to be explicitly mapped and allocated, either manually or through some kind of automation. There are tools such as Serverless to help with setting this up, but the tooling can still be called rudimentary.
I wrote a simple Lambda function that implements a counter using DynamoDB for storage. It is a simple Node.js script that makes an API call to DynamoDB and returns the answer to the caller. It should be noted that no username or password is needed for DynamoDB, as access rights are granted through IAM. The source code is available as a gist:
https://gist.github.com/lhahne/167c40baa7febdfc8f2b
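In outline, the idea looks like the following sketch (the table and attribute names here are hypothetical, and the 2015-era context callbacks are used):

```javascript
// Lambda handler that atomically increments a counter in DynamoDB and
// returns the new value. Credentials come from the function's IAM role,
// so no username or password is configured.
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context) {
  dynamodb.update({
    TableName: 'counters',                    // hypothetical table name
    Key: { id: 'page-views' },                // hypothetical counter key
    UpdateExpression: 'ADD hits :inc',
    ExpressionAttributeValues: { ':inc': 1 },
    ReturnValues: 'UPDATED_NEW'
  }, function (err, data) {
    if (err) {
      context.fail(err);
    } else {
      context.succeed({ count: data.Attributes.hits });
    }
  });
};
```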
To sum up, I think that hosting a simple website on S3 and Lambda is entirely possible. This is especially viable for small websites with limited interactive functionality that can be implemented with a small number of Lambda functions. However, implementing larger applications on top of API Gateway and Lambda might be challenging, as all routing needs to be handled in API Gateway and all functions need to be managed separately in Lambda.


Lauri Hahne
