This content originally appeared on DEV Community and was authored by Matheus Bernardes Spilari
Monitoring is great, but alerting is what actually wakes us up. In this post, I’ll show how we built a log-based alerting pipeline using Grafana, Loki, and Promtail, running entirely in containers.
Yes, alerts triggered from log patterns, not metrics. This approach is especially useful when monitoring behaviors like rate-limiter violations that may not expose traditional metrics.
Why Log-Based Alerts?
We were facing a challenge: we implemented a rate limiter in Nginx to prevent abuse, but traditional metrics collection (via Prometheus) couldn't capture rate-limit violations (HTTP 429 responses), because those were handled entirely at the reverse proxy level, not within the app itself.
The solution? Log everything. Then alert from logs.
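For context, the rate limiting itself lives entirely in the Nginx config. The post doesn't reproduce our exact setup, so treat the snippet below as a minimal sketch: the zone name, rate, burst value, and location are illustrative assumptions, and limit_req_status is what makes Nginx answer with 429 instead of its default 503.

http {
    # Track clients by IP; allow ~10 requests/second per client (illustrative values)
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /api/ {
            # Permit short bursts, reject the overflow
            limit_req zone=api_limit burst=20 nodelay;
            # Return 429 (the status we alert on) instead of the default 503
            limit_req_status 429;
        }
    }
}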
🛠️ Tools Used
- Grafana
- Loki
- Promtail
- Nginx (with rate limiting configured)
- Docker Compose
- Mailtrap (for testing alert emails)
- Alertmanager (for email delivery)
Configuring Custom Access Logs in Nginx
In Nginx, you can define a custom log format using the log_format directive and then tell Nginx to use it with the access_log directive. Here's an example configuration:
http {
    log_format structured '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log structured;
}
Explanation:
- log_format structured: defines a custom log format named structured. The format includes common fields such as the client IP address, username, timestamp, request line, HTTP status code, response size, referrer, and user agent.
- access_log: tells Nginx to write access logs to /var/log/nginx/access.log using the structured format defined above.
This setup allows you to customize what gets logged in each request, which is useful for debugging, analytics, or integrating with log management systems.
💡 Tip: The name structured is just a label. You can name your format anything you'd like, such as custom, detailed, or json_logs, depending on its purpose.
Set Up the Stack (Grafana, Loki, Promtail)
Our stack uses docker-compose to orchestrate services.
Loki Configuration
At the root of our project, we create a folder called loki. Inside it, we'll have two files: config.yaml and Dockerfile.
You can find this file here.
config.yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: debug
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  metric_aggregation_enabled: true

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

pattern_ingester:
  enabled: true
  metric_aggregation:
    loki_address: localhost:3100

ruler:
  alertmanager_url: http://alertmanager:9093

frontend:
  encoding: protobuf

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/analytics/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
#  reporting_enabled: false
💡 Tip: The alertmanager_url points to the Alertmanager container we configured in the previous post; we use its Docker service name (alertmanager) as the hostname.
Dockerfile
FROM grafana/loki:latest
COPY config.yaml /etc/loki/config.yml
ENTRYPOINT ["/usr/bin/loki", "--config.file=/etc/loki/config.yml", "--config.expand-env=true"]
Promtail Configuration
Promtail tails the access.log file from the Nginx container (shared through a Docker volume) and pushes it to Loki.
We create a folder at the root called promtail and, inside it, two files: config.yaml and Dockerfile.
config.yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx-logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/access.log
💡 Tip: You can find the config file right here.
Dockerfile:
FROM grafana/promtail:latest
COPY config.yaml /etc/promtail/config.yml
ENTRYPOINT ["/usr/bin/promtail", "-config.file=/etc/promtail/config.yml"]
Explore Logs in Grafana
In the Grafana dashboard, go to Connections > Data Sources > Add Data Source > Search for: Loki
Configure the connection URL to http://loki:3100.
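If you prefer configuration as code, the same data source can be provisioned from a file instead of the UI. This is just a sketch using Grafana's file-based provisioning; the file name and mount path are assumptions on our side:

# assumed location: /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100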
Once Loki is added as a data source, you can use Explore in Grafana to query for specific patterns like:
{job="nginx"} |= "429"
We tested the rate limiter by sending many requests to the same endpoint — triggering 429s — and confirming they were logged.
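Any quick loop will do for this kind of test. The endpoint and port below are placeholders, not the ones from our project:

# Send 50 rapid requests and print only the HTTP status codes
# (expect 429s once the limit kicks in)
for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/api/hello
done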
Create Alerts on Rate Limit Violations
Now the cool part — alerting when rate limiting is consistently violated:
Loki Query (in Grafana alert):
count_over_time({job="nginx"} |= "429" [1m]) > 5
This triggers an alert if more than 5 requests per minute result in a 429 status.
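One caveat: the substring filter |= "429" also matches a 429 that happens to appear anywhere else in the line, such as the response size. If you want to match only the status field, LogQL's pattern parser can extract it from the structured format we defined earlier. Treat this as a sketch to adapt to your exact log layout:

sum(count_over_time(
  {job="nginx"}
    | pattern `<ip> - <user> <_> "<method> <path> <_>" <status> <size> <_> <_>`
    | status = "429"
  [1m]
)) > 5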
Alert description:
- Summary: Rate limiter triggered
- Description: The rate limiter was triggered more than 5 times in the last minute.
Send Alerts via Alertmanager
Alerts from Grafana are routed to Alertmanager, which is configured to send emails via Mailtrap:
global:
  smtp_smarthost: 'sandbox.smtp.mailtrap.io:2525'
  smtp_from: 'alertmanager@email.com'
  smtp_auth_username: '<your_username>'
  smtp_auth_password: '<your_password>'

route:
  receiver: email-alert

receivers:
  - name: email-alert
    email_configs:
      - to: 'devs@yourcompany.com'
        send_resolved: true
In Grafana, configure a Contact Point that targets Alertmanager.
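Exactly how depends on your Grafana version. On recent versions you can also provision the contact point from a file; everything in this sketch (path, uid, names) is an assumption rather than something from our repo:

# assumed location: /etc/grafana/provisioning/alerting/contact-points.yaml
apiVersion: 1

contactPoints:
  - orgId: 1
    name: external-alertmanager
    receivers:
      - uid: external-alertmanager
        type: prometheus-alertmanager
        settings:
          url: http://alertmanager:9093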
Improving Email Content
We customized our email template in Alertmanager to include meaningful data, especially since log-based alerts can have dynamic labels like:
html: |
  <h2>[{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}</h2>
  <p>{{ .CommonAnnotations.description }}</p>
  <ul>
  {{ range .Alerts }}
    <li>
      <strong>Alert:</strong> {{ .Labels.alertname }}<br/>
      <strong>Filename:</strong> {{ .Labels.filename }}<br/>
      <strong>Service:</strong> {{ .Labels.service_name }}<br/>
      <strong>Time:</strong> {{ .StartsAt }}
    </li>
  {{ end }}
  </ul>
This ensures that alerts triggered by logs still include all the context needed to debug them.
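For completeness, that html block is not a standalone file: it sits inside the receiver's email_configs entry in alertmanager.yml, roughly like this (abbreviated, the template body being the one shown above):

receivers:
  - name: email-alert
    email_configs:
      - to: 'devs@yourcompany.com'
        send_resolved: true
        html: |
          <h2>[{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}</h2>
          <!-- ...rest of the template shown above... -->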
Update our docker compose
promtail:
  container_name: promtail
  build:
    context: ./promtail
  networks:
    - app_network
  ports:
    - "9094:9094"
  volumes:
    - ./nginx/logs:/var/log/nginx

loki:
  container_name: loki
  build:
    context: ./loki
  networks:
    - app_network
  ports:
    - "3100:3100"
  volumes:
    - ./loki:/loki
    - loki_data:/var/loki
  depends_on:
    - promtail
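One assumption baked into this: the Nginx container has to write its logs into the same host folder that Promtail mounts. The Nginx service itself isn't shown here, but the relevant part would look roughly like this (image name and other settings are placeholders):

nginx:
  image: nginx:latest        # placeholder; keep whatever image/build you already use
  networks:
    - app_network
  volumes:
    # Same host folder Promtail mounts at /var/log/nginx above
    - ./nginx/logs:/var/log/nginx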
Final Result
With this setup:
- Nginx limits abusive traffic and logs violations.
- Promtail forwards those logs to Loki.
- Grafana detects violations with log queries and fires alerts.
- Alertmanager sends customized emails to our dev team.
No code change in the app. Just pure log power.
Conclusion
Using Grafana and Loki for log-based alerting opened up an entire layer of observability for us. It’s especially powerful when paired with infrastructure-level tools like Nginx rate limiting.
📍 Reference
💻 Project Repository
👋 Talk to me