👀Monitoring
Visibility and control over your application
In this chapter, you will learn to set up logging and monitoring for your application. Hyperswitch relies on Promtail, Loki, OpenTelemetry and Grafana for its logs and metrics. In this guide, we will delve into these tools and assist you in setting them up efficiently.
In the world of application monitoring, two critical elements play pivotal roles:

Element | What is it | Why is it required |
---|---|---|
Logs | A running diary of all the activities that happen inside the application | Useful for tracking, debugging, and auditing |
Metrics | Like measuring sticks (e.g., a counter) highlighting the performance of different parts of the application | Used to assess, analyze, and track various aspects of a system/application, providing data-driven insights |
To effectively utilize both aspects, Hyperswitch relies on the following:

- Promtail (for scraping logs)
- Grafana Loki (for storing and viewing logs)
- OpenTelemetry Collector (for application metrics)
- CloudWatch (for system metrics)
This combination, along with Grafana for visualization, seamlessly integrates logs and metrics into intuitive, interactive dashboards.
Grafana Loki: Grafana Loki is a standout in log aggregation. It draws inspiration from Prometheus and is designed to be horizontally scalable and highly available, with a focus on multi-tenancy. Unlike traditional logging systems, Loki doesn't index the content of logs but concentrates on a set of labels associated with each log stream. This approach not only keeps costs in check but also ensures swift access to logs during queries. Refer here for more details regarding Loki.
Promtail: Promtail serves as the agent that powers Loki by collecting logs. Tailored specifically for Loki, Promtail runs on each Kubernetes node and utilizes the same service discovery mechanisms as Prometheus. Before sending logs to Loki, Promtail labels, transforms, and filters them to ensure that only relevant data reaches Loki. This data processing streamlines the logging process. Additionally, Loki boasts its own query language, LogQL, which is compatible with its command-line interface and Grafana. Integration capabilities with Prometheus's Alert Manager further solidify Loki's position as a pivotal tool in modern logging. To know more about Promtail refer here.
By following these installation steps, you can set up Grafana Loki and Promtail effectively, enabling comprehensive logging capabilities for your application monitoring needs.
Step 1: Install Helm
If you're using macOS and don't have Helm installed, you can easily install it using the following command with Homebrew:
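```bash
# Install Helm via Homebrew
brew install helm
```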
Step 2: Install Loki
Once Helm is installed, you can proceed with the installation of Loki. Loki can be installed in several deployment modes; here, we provide a setup guide for installing Loki in a scalable mode.
Make sure you install the grafana/loki chart into a Kubernetes namespace of your choice, for example:
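A minimal sketch is shown below; the release name loki and the namespace grafana-loki are assumptions chosen to match the service names used later in this guide, so adjust them to your setup:

```bash
# Add the Grafana Helm repository if you haven't already
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Loki into its own namespace
helm install loki grafana/loki \
  --namespace grafana-loki \
  --create-namespace
```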
Step 3: Install Promtail
To set up the endpoint for Loki's gateway, which Promtail will use to transmit logs, we need to specify it in the Promtail chart's configuration values. In our specific scenario, the designated endpoint is "loki.grafana-loki.svc.cluster.local." Let's proceed by incorporating this endpoint into the Promtail chart values.
First, let's obtain the basic values for the Grafana Promtail chart and store them in a file named "promtail-overrides.yaml" by running the following command:
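```bash
# Dump the chart's default values into a local overrides file
helm show values grafana/promtail > promtail-overrides.yaml
```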
Next, open the promtail-overrides.yaml file, locate the section that specifies the Loki clients' URL, and replace the existing URL with the new endpoint:
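In the values file, the client URL lives under the config.clients section (the exact layout may vary between chart versions). The snippet below is a sketch using the endpoint named above together with Loki's standard push API path:

```yaml
# promtail-overrides.yaml (excerpt)
config:
  clients:
    # Point Promtail at the Loki gateway service inside the cluster
    - url: http://loki.grafana-loki.svc.cluster.local/loki/api/v1/push
```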
With the endpoint correctly updated in the configuration, we are now ready to deploy Promtail. Execute the following command and patiently wait for all pods to reach a "Ready" state:
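For example, assuming Promtail is installed into the same grafana-loki namespace with the release name promtail:

```bash
# Install Promtail with the customized values
helm install promtail grafana/promtail \
  --namespace grafana-loki \
  --values promtail-overrides.yaml

# Watch the pods until every Promtail pod reports Ready
kubectl get pods --namespace grafana-loki --watch
```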
By following these steps, you will configure Promtail to utilize the specified "loki.grafana-loki.svc.cluster.local" endpoint for log transmission to Loki, ensuring seamless integration into your monitoring environment.
Step 4: Install Grafana
You can proceed with the installation of the Helm chart for Grafana using the following commands:
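A typical invocation is shown below; the grafana-loki namespace and the release name grafana are assumptions, so adjust them to your setup:

```bash
# Add the Grafana Helm repository if you haven't already
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Grafana alongside Loki and Promtail
helm install grafana grafana/grafana --namespace grafana-loki
```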
It's worth noting that in a standard installation, the Grafana service is of type ClusterIP. However, if you are using MetalLB as a network load balancer in your cluster and have configured the service type as LoadBalancer, you can disregard this information. We will address port-forwarding for the service at a later stage.
To configure Grafana's data sources and dashboard, follow these steps:
Port-forward Grafana to access it locally:
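For example, forwarding local port 8080 to the Grafana service's port 80 (the release name grafana and the namespace grafana-loki are assumptions):

```bash
kubectl port-forward --namespace grafana-loki svc/grafana 8080:80
```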
You can also choose to expose Grafana differently, such as assigning it an external IP via a Load Balancer or setting up an Ingress route depending on your preference.
Obtain the login credentials:
By default, the Grafana username is "admin." However, you'll need to retrieve the password. First, list all the Secrets in your namespace:
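For example, if Grafana was installed into the grafana-loki namespace:

```bash
kubectl get secrets --namespace grafana-loki
```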
Locate the secret containing the password. To extract and decode it, use the following command:
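Assuming the admin password is stored in a Secret named grafana (it is typically named after the Helm release):

```bash
kubectl get secret --namespace grafana-loki grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo
```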
This will provide you with the password required to log in.
Log in to Grafana using the obtained credentials.
Add Grafana Loki as a data source:
Access the Grafana interface via http://localhost:8080/ or the respective URL if you have exposed it differently.
Navigate to the Data Sources section in the Grafana UI.
Click on "Add data source."
Configure the data source with the following details:
Name: Choose a descriptive name for the data source.
Type: Select "Loki" from the list of available data sources.
HTTP URL: Use the endpoint of the Grafana Loki gateway service. In this setup, that is http://loki-loki-distributed-gateway.grafana-loki.svc.cluster.local.
Test the data source to ensure it's working correctly.
Save the data source configuration.
Now you have successfully configured Grafana's data source with Grafana Loki, and you can proceed to create dashboards and visualize your data. Similarly, you can configure Prometheus for metrics.
Logging from the payment checkout web client is crucial for tracking and monitoring the flow of payments. It provides a transparent record of events, errors, and user interactions, aiding developers and support teams in identifying issues, debugging, and ensuring the security and reliability of payment processes. Well-implemented logging enhances traceability and facilitates a more efficient resolution of potential problems in the payment checkout experience.
Logs are sent to the server via non-blocking Beacon API requests. This means that even if the configured logging endpoint is incorrect, it will not affect the core payment functionalities. You can find more about the structure of the logging request payload in the beaconApiCall function in the OrcaLogger.res file.
If you want to collect logs, you can do so by setting up an endpoint on your server to receive, process and persist logs.
In webpack.common.js, you would have to enable the logging flag, and configure the logging endpoint and log level.
Once you've completed the aforementioned steps for logging and monitoring, you can initiate a payment via the SDK and trace it within the logging dashboard using identifiers such as the request ID or order ID.