Below is an example of deploying Mattermost on Google Cloud Platform (GCP); a similar approach works on any other cloud, such as AWS or Azure.
In today’s enterprise, it is crucial to have a secure messaging portal that supports multiple teams and groups, makes it easy to share information and files, and offers efficient search.
Although several messaging services are available, security is the biggest challenge with these kinds of applications. We chose Mattermost for the security and flexibility it provides. Mattermost is an open-source, hybrid cloud alternative to proprietary SaaS messaging for teams.
As Mattermost is a self-hosted application, it gives us a high degree of control over security, privacy, and legal compliance.
Other major features of Mattermost include:
- Multi-language support – around 16 languages
- Active Directory/LDAP login and certificate-based authentication within high-security networks
- Plugin integration
- Open source
- Support for multiple devices and private cloud deployment
- Syntax highlighting, emoji support, and friendly channel names
For my project, I decided to use Kubernetes on Google Cloud (GKE) for automated deployment of Mattermost. Deploying the Mattermost Operator on Kubernetes provides a single, common installation and management method that can be implemented in practically any environment with less IT overhead and more automation. A few other reasons to use Kubernetes were container orchestration, scalability and modularity, reduced resource costs, high availability, and outstanding community and industry support.
With GKE’s Cluster Autoscaler, scaling has never been easier: if a node is underutilized and all the pods running on it can be moved elsewhere, the node is deleted. Conversely, if there are not enough resources in the cluster to schedule a newly created pod, a new node is added.
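A cluster with the autoscaler enabled can be created in one command. This is a minimal sketch; the project, zone, cluster name, and node limits below are placeholders, so substitute your own values.

```shell
# Sketch: create a GKE cluster with the Cluster Autoscaler enabled.
# Zone, cluster name, and node counts are placeholders -- adjust to your project.
gcloud container clusters create mattermost-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 5
```

With these flags, GKE keeps the node count between 1 and 5 depending on the pending pods' resource requests.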
When you run a GKE cluster, you also gain the benefit of advanced cluster management features that Google Cloud provides. These include:
- Google Cloud load-balancing for Compute Engine instances
- Node pools to designate subsets of nodes within a cluster for additional flexibility
- Automatic scaling of your cluster’s node instance count
- Automatic upgrades for your cluster’s node software
- Node auto-repair to maintain node health and availability
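Several of the features above are simple flags on a node pool. As a sketch (the pool name, cluster name, zone, and machine type are all placeholders), adding a node pool with auto-upgrade and auto-repair enabled might look like:

```shell
# Sketch: add a node pool with auto-upgrade and auto-repair enabled.
# Names, zone, and machine type are placeholders.
gcloud container node-pools create extra-pool \
  --cluster mattermost-cluster \
  --zone us-central1-a \
  --machine-type e2-highmem-4 \
  --num-nodes 2 \
  --enable-autoupgrade \
  --enable-autorepair
```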
Because GCP already has Cloud Monitoring integrated, it is easy to keep track of usage and events and to send notifications, based on different conditions, through various notification channels.
I used Elasticsearch and Kibana to monitor and analyze any problem in the environment. Elasticsearch is a powerful open-source search and analytics engine used for full-text search and for analyzing logs and metrics. Kibana is an open-source visualization and exploration tool for reviewing logs and events. It reads the logs from Elasticsearch and displays the information as dashboards, graphs, reports, maps, etc. The visualization makes it easy to spot changes in the trends of errors or other significant events in the input source.
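Once logs are indexed, Elasticsearch can be queried directly over HTTP. The sketch below assumes Elasticsearch is reachable on `localhost:9200` and that logs land in an index pattern like `logstash-*` with `level` and `@timestamp` fields; adjust these names to your own pipeline.

```shell
# Sketch: fetch the ten most recent error-level log entries.
# Host, index pattern, and field names are assumptions about the log pipeline.
curl -s -X GET "localhost:9200/logstash-*/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": { "match": { "level": "error" } },
    "size": 10,
    "sort": [ { "@timestamp": { "order": "desc" } } ]
  }'
```

Kibana runs queries like this under the hood when you filter a dashboard.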
Another tool I used for monitoring the Kubernetes cluster is Prometheus.
Prometheus is another free, open-source application used for event monitoring and alerting. It collects metrics via a pull model over HTTP, which makes shipping application metrics to Prometheus very simple. Prometheus lets us use the collected metrics data to make proper scaling decisions.
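The pull model is configured declaratively: Prometheus polls each target's `/metrics` endpoint over HTTP at a fixed interval. The fragment below is a minimal sketch of such a configuration; the job name and target address are placeholders, not the actual values from this deployment.

```shell
# Sketch: write a minimal Prometheus scrape configuration.
# Job name and target address are placeholders for illustration.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: mattermost
    static_configs:
      - targets: ['mattermost-app:8067']
EOF
```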
A GKE Kubernetes cluster was created. A cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.
The GKE cluster here has 3 nodes, and a LoadBalancer service was used to distribute external traffic. Google Cloud load balancing is a managed service, which means its components are redundant and highly available. If a load balancing component fails, it is restarted or replaced automatically and immediately.
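After creating the cluster, it is worth verifying that all nodes are up before installing anything. A quick sketch (the cluster name and zone are the same placeholders as above):

```shell
# Sketch: fetch kubectl credentials for the new cluster and list its nodes.
gcloud container clusters get-credentials mattermost-cluster --zone us-central1-a
kubectl get nodes   # the 3 worker nodes should report STATUS "Ready"
```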
For the Mattermost application, the MySQL Operator, MinIO Operator, NGINX Ingress Controller, and Mattermost Operator were installed using the kubectl tool. For the Prometheus installation, Tiller (the Helm v2 server-side component) was used.
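In outline, the installation might look like the following. The manifest file names are illustrative placeholders (use the versions published in each project's documentation), while the Tiller setup and `helm install` syntax are standard Helm v2:

```shell
# Sketch: install the operators with kubectl.
# Manifest file names are placeholders -- download the real ones from each project.
kubectl create namespace mattermost-operator
kubectl apply -n mattermost-operator -f mattermost-operator.yaml
kubectl apply -f mysql-operator.yaml
kubectl apply -f minio-operator.yaml
kubectl apply -f nginx-ingress-controller.yaml

# Helm v2: set up Tiller with a service account, then install Prometheus.
kubectl create serviceaccount tiller -n kube-system
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm install stable/prometheus --name prometheus
```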
To access the Prometheus server and Alertmanager URLs from outside, on ports 9090 and 9093 respectively, we exposed the pods through Services of type LoadBalancer. Similarly, Elasticsearch and Kibana were installed using the appropriate YAML files. Just like Prometheus, Elasticsearch and Kibana can also be exposed as LoadBalancer services for external access.
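One way to expose the services is to patch their type to LoadBalancer. This sketch assumes the default service names created by the stable/prometheus chart (`prometheus-server`, `prometheus-alertmanager`); your names may differ:

```shell
# Sketch: switch the Prometheus services to type LoadBalancer.
# Service names are assumed defaults from the stable/prometheus chart.
kubectl patch svc prometheus-server \
  -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc prometheus-alertmanager \
  -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc   # the EXTERNAL-IP column shows the assigned public addresses
```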