Monitoring

Observability setup for the Freeze Design webshop.

Monitoring Stack

| Tool    | Purpose           | Dashboard              |
|---------|-------------------|------------------------|
| Sentry  | Error tracking    | sentry.io/freezedesign |
| PostHog | Product analytics | eu.posthog.com         |
| Upptime | Uptime monitoring | GitHub Pages           |

📊 PostHog Setup: See the full PostHog Analytics Guide for configuration, custom events, feature flags, and A/B testing.

Sentry Setup

Backend Errors

Once the SDK is initialised, Sentry captures unhandled Django errors automatically. The DSN is read from the environment:

# settings.py
import os

SENTRY_DSN = os.getenv('SENTRY_DSN')
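
Setting the DSN alone does not start the SDK; settings.py also needs an init call. A minimal sketch, assuming the sentry-sdk package with its Django integration (the exact options used by the project may differ):

# settings.py (continued) -- hypothetical init sketch
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

if SENTRY_DSN:  # only enable Sentry when a DSN is configured
    sentry_sdk.init(
        dsn=SENTRY_DSN,
        integrations=[DjangoIntegration()],
        send_default_pii=False,  # don't send user PII by default
    )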

View errors at: Sentry Dashboard → Issues

Frontend Errors

Next.js errors are captured via @sentry/nextjs:

// sentry.client.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1, // trace 10% of page loads and navigations
});

Error Alerts

Configure alerts in Sentry:

  1. Go to Settings → Alerts
  2. Create alert rule
  3. Set conditions (e.g., new errors)
  4. Configure notifications (email, Slack)

Uptime Monitoring

An Upptime workflow on GitHub Actions checks the endpoints below every 5 minutes.

Monitored Endpoints

| Endpoint                                   | Expected Status |
|--------------------------------------------|-----------------|
| https://api.freezedesign.nl/api/products/  | 200             |
| https://www.freezedesign.nl/               | 200             |
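
For a quick manual check of the same expectations, a small sketch using the requests library (a hypothetical helper script, not part of the Upptime setup):

# check_uptime.py -- hypothetical ad-hoc check mirroring the Upptime expectations
import requests

ENDPOINTS = {
    "https://api.freezedesign.nl/api/products/": 200,
    "https://www.freezedesign.nl/": 200,
}

for url, expected in ENDPOINTS.items():
    status = requests.get(url, timeout=10).status_code
    print(f"{url} -> {status} ({'OK' if status == expected else 'UNEXPECTED'})")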

Status Page

Public status page at: https://freezedesign.github.io/upptime/

Application Logs

View Backend Logs

# Docker logs
docker logs backend --tail 100 -f

# Application logs
docker exec backend tail -f /app/logs/django.log

Log Levels

| Level    | Use Case                   |
|----------|----------------------------|
| DEBUG    | Development only           |
| INFO     | Normal operations          |
| WARNING  | Unexpected but handled     |
| ERROR    | Errors requiring attention |
| CRITICAL | Service affecting          |
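
How these levels reach the django.log file tailed above is configured in Django's LOGGING setting. A minimal sketch, assuming the /app/logs/django.log path from the commands above (the project's actual handlers and formatters may differ):

# settings.py -- hypothetical logging sketch
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',  # INFO and above are written to the file
            'class': 'logging.FileHandler',
            'filename': '/app/logs/django.log',
        },
    },
    'root': {
        'handlers': ['file'],
        'level': 'INFO',
    },
}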

Health Checks

Backend Health

curl https://api.freezedesign.nl/api/health/

Response:

{
  "status": "healthy",
  "database": "ok",
  "cache": "ok"
}
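
The view behind this endpoint isn't shown here; a minimal sketch of one that produces a response of this shape, assuming Django with the default database connection and a configured cache:

# health/views.py -- hypothetical sketch of the health endpoint
from django.core.cache import cache
from django.db import connection
from django.http import JsonResponse


def health(request):
    checks = {"database": "ok", "cache": "ok"}
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")  # cheap round-trip to the database
    except Exception:
        checks["database"] = "error"
    try:
        cache.set("health_check", "ok", 5)  # short-lived key to exercise the cache
        if cache.get("health_check") != "ok":
            raise RuntimeError("cache read-back failed")
    except Exception:
        checks["cache"] = "error"
    status = "healthy" if all(v == "ok" for v in checks.values()) else "unhealthy"
    return JsonResponse({"status": status, **checks})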

Frontend Health

curl -I https://www.freezedesign.nl/

Performance Monitoring

Sentry Performance

Enable transaction tracing:

SENTRY_TRACES_SAMPLE_RATE = 0.1  # 10% of requests
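
The setting only takes effect when it is passed to the SDK's init call. A self-contained sketch of the assumed wiring (in the project this would simply extend the existing sentry_sdk.init from the Sentry setup):

# settings.py -- hypothetical wiring of the sample rate into the SDK
import os
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

SENTRY_TRACES_SAMPLE_RATE = float(os.getenv('SENTRY_TRACES_SAMPLE_RATE', '0.1'))

sentry_sdk.init(
    dsn=os.getenv('SENTRY_DSN'),
    integrations=[DjangoIntegration()],
    traces_sample_rate=SENTRY_TRACES_SAMPLE_RATE,  # 0.1 = 10% of requests traced
)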

View in: Sentry → Performance

Key Metrics

| Metric              | Target  | Alert Threshold |
|---------------------|---------|-----------------|
| Response time (p95) | < 500ms | > 2000ms        |
| Error rate          | < 0.1%  | > 1%            |
| Uptime              | > 99.9% | < 99%           |

Runbook: High Error Rate

  1. Check Sentry for new errors
  2. Review recent deployments
  3. Check infrastructure status
  4. If needed, roll back the deployment using the commands below:
# Rollback to previous version
docker-compose down
git checkout <previous-tag>
docker-compose up -d

Runbook: Slow Response Times

  1. Check database query performance
  2. Review cache hit rates (see the Python sketch at the end of this runbook)
  3. Check for resource constraints
  4. Scale if needed
# Check database connections (adjust -U to the project's database user)
docker exec postgres psql -U postgres -c "SELECT * FROM pg_stat_activity;"

# Check Redis memory
docker exec redis redis-cli info memory
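
Step 2 mentions cache hit rates, which the commands above don't surface directly. A sketch using redis-py, assuming the Redis port is reachable from wherever the script runs (for example published on localhost:6379):

# cache_hit_rate.py -- hypothetical helper for checking cache hit rates
import redis

r = redis.Redis(host="localhost", port=6379)  # adjust host/port to the deployment
stats = r.info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
total = hits + misses
print(f"Cache hit rate: {hits / total:.1%}" if total else "No cache traffic yet")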