The stateless API is composed of a front website in React and an API that queries Prometheus every 30 seconds to get the list of all available Jitsi servers and their current CPU usage. The web application then selects the Jitsi server with the most CPU available and returns its URL to the user. With that URL, the user can easily connect to the Jitsi server and start enjoying the call with optimal sound and video quality. All Jitsi servers are deployed on Scaleway Instances, which can hold a large number of concurrent video bridges.

Now that we have explained the general architecture and the typical user workflow of this application, let's see how it is deployed using infrastructure-as-code technologies. Terraform is an infrastructure tool that manages cloud resources in a declarative paradigm. We decided to use the Scaleway Terraform Provider to manage all our infrastructure from a single versioned place. All changes applied to our infrastructure are tracked in a git repository. To ensure consistency across concurrent Terraform executions, the Terraform state is persisted in a Scaleway Database managed PostgreSQL instance. For that, we used the pg backend in Terraform.

We created all the required instances to make this application run. The most important are the Jitsi servers: at the moment, we have created more than 100 of those (DEV1-L type). These instances run the Jitsi videoconference solution. Prometheus scrapes the state of each Jitsi instance and, in particular, the CPU usage of each Jitsi server. The API instances query Prometheus to identify the CPU usage of all Jitsi servers and return it to the web application.

Now we are going to complete this Terraform module by enabling those instances to serve our application. When creating an instance, you have to select or create an image. In each cloud deployment, instances are booted with a specific cloud image that is designed to meet the specific requirements of the instance.

First, we created a base image called base, which was the starting point for all the others. On this base image, we installed the requirements to run containers: Docker, docker-compose, and a node_exporter that is used by our Prometheus monitoring system to know, among other information, the CPU usage of the machine.

From the base image, we then created a Jitsi image using the official docker-compose distribution, docker-jitsi-meet. We also added an Nginx Prometheus exporter to the docker-jitsi-meet docker-compose file for monitoring purposes. When a Jitsi instance boots with this image, docker-compose starts, and the Jitsi server, which runs as a container, automatically starts working as well. Note that the base and Jitsi images are created with Ansible playbooks, which allows us to easily recreate images when needed.

Finally, we created a front container image which gathers the web application code (React) and the API code (Node.js). While the base image aims at providing the system-wide requirements such as the operating system and other basic components (Docker, docker-compose, node_exporter…), we needed to be able to deploy new versions of our applications without rebooting an instance with a new image. This front image therefore runs inside containers that docker-compose pulls from a private Scaleway registry.
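As a sketch, the pg backend described above can be configured in a few lines of Terraform; the connection string and schema name here are placeholder assumptions, not our actual values:

```hcl
terraform {
  # Persist the state in PostgreSQL so that concurrent `terraform apply`
  # runs share a single, locked state.
  backend "pg" {
    # Placeholder: point this at the Scaleway Database managed
    # PostgreSQL instance.
    conn_str    = "postgres://terraform@db.example.scw.cloud:5432/terraform?sslmode=require"
    schema_name = "jitsi_infra"
  }
}
```

In practice the credentials would be supplied through the standard `PGUSER`/`PGPASSWORD` environment variables or a partial backend configuration, rather than committed to the git repository.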
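A Jitsi server of the kind described above could then be declared with the Scaleway provider roughly as follows; the image label and the fleet size are illustrative assumptions based on the numbers mentioned in the text:

```hcl
# Illustrative sketch: boot Jitsi servers from the pre-built Jitsi image.
resource "scaleway_instance_server" "jitsi" {
  count = 100                    # assumption: fleet size mentioned above
  name  = "jitsi-${count.index}" # jitsi-0, jitsi-1, …
  type  = "DEV1-L"               # the instance type used for Jitsi servers
  image = "jitsi"                # assumption: label of the image built with Ansible
}
```

Because the image already embeds docker-compose and the docker-jitsi-meet configuration, scaling the fleet up or down is just a matter of changing `count` and running `terraform apply`.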
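The monitoring exporter mentioned above can be added as an extra service in the docker-jitsi-meet docker-compose file. The exact wiring below is an assumption (it presumes the Nginx `stub_status` endpoint is enabled on the `web` service):

```yaml
# Sketch: extra service appended to the docker-jitsi-meet compose file.
services:
  # Scrapes Nginx's stub_status page and exposes Prometheus metrics
  # on port 9113 for our Prometheus server to collect.
  nginx-exporter:
    image: nginx/nginx-prometheus-exporter:latest
    command:
      - -nginx.scrape-uri=http://web:80/stub_status
    ports:
      - "9113:9113"
```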