Introduction
Something breaks in production at 2am. You open your logs and find a wall of generic messages that tell you nothing useful. No request ID to trace through the stack. No user context. No indication of what state the application was in. Just "Something went wrong" repeated across a dozen lines with timestamps.
That's bad logging. Resilient logging is the opposite: logs that survive infrastructure failures, carry enough context to be useful, don't crash your application when the logging system itself breaks, and don't expose sensitive data in the process.
Django sits on top of Python's standard logging module, which gives you more power than most people use. This article walks through setting it up properly from the ground up.
How Django logging works
Python's logging system has four main pieces: loggers, handlers, filters, and formatters. Understanding how they connect is the foundation for everything else.
- A logger is the object your code calls. You get one with logging.getLogger(__name__) and call .info(), .warning(), or .error() on it.
- A handler decides where the log record goes: a file, the console, an external service, an email.
- A formatter controls what the log record looks like when it's written out.
- A filter can inspect a log record and decide whether it should be processed at all.
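Wired together by hand, the four pieces look like this. A minimal sketch for illustration only (the health-check filter is invented for the example); in Django you'll normally do all of this declaratively in settings:

import logging

# formatter: what the record looks like when written out
formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")

# handler: where the record goes (stderr, in this case)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

# filter: whether the record gets processed at all
class DropHealthChecks(logging.Filter):
    def filter(self, record):
        return "healthcheck" not in record.getMessage()

handler.addFilter(DropHealthChecks())

# logger: the object your code calls
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("all four pieces connected")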
Loggers are hierarchical, using dotted namespaces. A logger named myapp.views is a child of myapp, which is a child of the root logger. Records propagate up the tree unless you tell them not to. This hierarchy is what lets you configure logging for an entire app in one place.
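A quick illustration of how the hierarchy behaves (the names are arbitrary):

import logging

parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.views")

parent.setLevel(logging.WARNING)

# the child has no level of its own, so it inherits its parent's
assert child.parent is parent
assert child.getEffectiveLevel() == logging.WARNING

In application code you get this naming for free by using the module path: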
import logging

from django.http import JsonResponse

from .models import Order  # wherever your Order model lives

logger = logging.getLogger(__name__)
# __name__ becomes "myapp.views" when this file is myapp/views.py

def create_order(request):
    logger.info("Creating order", extra={"user_id": request.user.id})
    try:
        order = Order.objects.create(user=request.user)
        logger.info("Order created", extra={"order_id": order.id})
        return JsonResponse({"id": order.id})
    except Exception:
        logger.exception("Order creation failed")
        return JsonResponse({"error": "Failed"}, status=500)

A production-ready config
Django's default logging setup is minimal. In production you want more: separate log levels for different parts of the system, structured output, and logs that go somewhere durable. Here's a config that covers the basics well.
import os

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "{asctime} {levelname} {name} {message}",
            "style": "{",
        },
        "json": {
            "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            # the logs/ directory must already exist; the handler creates
            # the file but not the directory
            "filename": os.path.join(BASE_DIR, "logs", "app.log"),
            "maxBytes": 1024 * 1024 * 10,  # 10 MB
            "backupCount": 5,
            "formatter": "json",
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
    "loggers": {
        "django": {
            "handlers": ["console"],
            "level": os.getenv("DJANGO_LOG_LEVEL", "INFO"),
            "propagate": False,
        },
        "myapp": {
            "handlers": ["console", "file"],
            "level": "DEBUG",
            "propagate": False,
        },
    },
}

Note
disable_existing_loggers: False matters
Setting this to True silences all loggers configured before your settings are loaded, which includes loggers set up by third-party libraries. Keep it False unless you have a specific reason to disable them.
The RotatingFileHandler is important for long-running applications. Without it, your log file grows forever until it fills the disk. The config above caps each file at 10 MB and keeps five rotated backups, so you get roughly 50 MB of history (60 MB on disk counting the active file) before the oldest entries are deleted.
For the JSON formatter you will need to install python-json-logger:
pip install python-json-logger

Adding context to every log line
The most common complaint about logs is that they're hard to trace. You see an error but you don't know which user triggered it, which request it came from, or what endpoint they were hitting. The fix is to attach a request ID to every log line for the duration of a request.
A middleware that generates a request ID and stores it in a thread-local variable is the cleanest way to do this under WSGI (for ASGI deployments, contextvars serve the same role). Then a custom log filter reads from that variable and adds it to every record automatically.
import threading
import uuid

# thread-locals suit WSGI's one-thread-per-request model; under ASGI,
# reach for contextvars instead
_request_context = threading.local()

def get_request_id():
    return getattr(_request_context, "request_id", None)

class RequestIDMiddleware:
    """Attach a unique ID to every request and store it for logging."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        request_id = request.headers.get("X-Request-ID") or str(uuid.uuid4())
        _request_context.request_id = request_id
        try:
            response = self.get_response(request)
            response["X-Request-ID"] = request_id
            return response
        finally:
            # clear the ID so a reused thread doesn't leak it into logs
            # emitted outside this request
            _request_context.request_id = None

import logging
from .middleware import get_request_id
class RequestIDFilter(logging.Filter):
    """Add request_id to every log record."""

    def filter(self, record):
        record.request_id = get_request_id() or "no-request"
        return True

MIDDLEWARE = [
"core.middleware.RequestIDMiddleware",
# ... other middleware
]
LOGGING = {
    ...
    "filters": {
        "request_id": {
            "()": "core.logging.RequestIDFilter",
        },
    },
    "formatters": {
        "verbose": {
            "format": "{asctime} [{request_id}] {levelname} {name} {message}",
            "style": "{",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "verbose",
            "filters": ["request_id"],
        },
    },
    ...
}

Now every log line looks like this:
# 2026-05-07 14:32:01 [3f2a1b9c-4d5e] INFO myapp.views Creating order
# 2026-05-07 14:32:01 [3f2a1b9c-4d5e] INFO myapp.views Order created
# 2026-05-07 14:32:01 [3f2a1b9c-4d5e] ERROR myapp.views Payment failed
# When something breaks, grep for the request ID and see the full story:
# grep "3f2a1b9c-4d5e" app.logStructured logging with structlog
Standard Python logging outputs strings. When you ship logs to a centralised service like Datadog, Loki, or CloudWatch Logs, string logs are hard to query. You end up writing regex patterns to extract values you should have logged as fields in the first place.
Structured logging outputs key-value pairs, usually as JSON. Every piece of context is a named field rather than an interpolated string, which makes queries like "show me all errors where order_total is over 1000" trivial.
structlog is the best way to add structured logging to Django. It wraps the standard library's logging module so you can use both together.
pip install structlog

import structlog

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.processors.JSONRenderer(),
    ],
    wrapper_class=structlog.stdlib.BoundLogger,
    context_class=dict,
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

import structlog
logger = structlog.get_logger(__name__)

def create_order(request):
    log = logger.bind(user_id=request.user.id, endpoint="create_order")
    log.info("creating_order")
    try:
        order = Order.objects.create(user=request.user)
        log.info("order_created", order_id=order.id, total=str(order.total))
        return JsonResponse({"id": order.id})
    except Exception:
        # log.exception captures the traceback, which the format_exc_info
        # processor then renders into the JSON output
        log.exception("order_creation_failed")
        return JsonResponse({"error": "Failed"}, status=500)

The JSON output from the above looks like this, which is trivially queryable:

{"event": "order_created", "order_id": 1042, "total": "149.99", "user_id": 88, "level": "info", "timestamp": "2026-05-07T14:32:01Z"}

Logging exceptions properly
There's a right and wrong way to log exceptions. The wrong way loses the traceback. The right way keeps it.
import logging

logger = logging.getLogger(__name__)

# Wrong: logs the message but loses the traceback
try:
    risky_operation()
except Exception as e:
    logger.error(f"Something failed: {e}")

# Also wrong: same problem, just more verbose
try:
    risky_operation()
except Exception as e:
    logger.error("Something failed", extra={"error": str(e)})

# Right: logger.exception() captures the full traceback automatically.
# Only call this inside an except block.
try:
    risky_operation()
except Exception:
    logger.exception("risky_operation failed")

# Also right: explicit exc_info=True if you need to use logger.error
try:
    risky_operation()
except Exception:
    logger.error("risky_operation failed", exc_info=True)

⚠ Gotcha
logger.exception() outside an except block logs nothing useful
logger.exception() calls sys.exc_info() to get the current exception. Outside an except block there is no current exception, so you get None. Always call it inside the except clause.
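A quick demonstration of the failure mode:

import logging

logger = logging.getLogger(__name__)

def cleanup():
    # no exception is active here, so there is nothing to capture
    logger.exception("cleanup failed")

cleanup()
# the message is logged, but the traceback section reads "NoneType: None"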
For Django views specifically, unhandled exceptions are already caught and logged by Django's own error handling. But for background tasks, management commands, and Celery workers, you need to be explicit because there's nothing to catch the exception above your code.
import logging

from celery import shared_task

logger = logging.getLogger(__name__)

@shared_task
def send_invoice(order_id):
    try:
        order = Order.objects.get(id=order_id)
        email_invoice(order)
        logger.info("invoice_sent", extra={"order_id": order_id})
    except Order.DoesNotExist:
        logger.warning("order_not_found", extra={"order_id": order_id})
    except Exception:
        logger.exception("invoice_send_failed", extra={"order_id": order_id})
        raise  # re-raise so Celery can retry or mark the task as failed

What not to log
Logs are often the least-secured part of a system. They get shipped to third-party services, stored in S3 buckets with wide permissions, and accessed by people who don't need to see customer data. Before you log something, ask whether it belongs in a log at all.
Never log these:
- Passwords, API keys, tokens, or any credentials
- Full credit card numbers, CVVs, or bank account details
- Personal data you wouldn't want a support engineer to read (national ID numbers, passport numbers)
- Session tokens or authentication cookies
- Full request bodies if they might contain any of the above
Log these instead:
- User IDs (not usernames, not emails if they're considered PII in your jurisdiction)
- Masked versions of sensitive values: card_last_four instead of card_number
- Outcome states: "payment succeeded", "verification failed"
- Timing and performance data
# Don't do this
logger.info("Payment processed", extra={
    "card_number": payment.card_number,  # never
    "cvv": payment.cvv,  # never
    "email": user.email,  # probably not
})

# Do this
logger.info("Payment processed", extra={
    "user_id": user.id,
    "card_last_four": payment.card_number[-4:],
    "amount": str(payment.amount),
    "currency": payment.currency,
    "processor_reference": payment.reference_id,
})
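Discipline helps, but a logging filter can also act as a safety net by scrubbing known-sensitive fields before any handler formats them. A minimal sketch (the key list is illustrative, and this is a last line of defence, not a substitute for leaving the values out in the first place):

import logging

SENSITIVE_KEYS = {"password", "token", "card_number", "cvv", "secret"}

class RedactFilter(logging.Filter):
    """Mask sensitive attributes before any handler formats the record."""

    def filter(self, record):
        # fields passed via extra={...} become attributes on the record
        for key in SENSITIVE_KEYS & set(record.__dict__):
            setattr(record, key, "[REDACTED]")
        return True

Attach it to your handlers via the filters key, exactly like the request_id filter earlier.

Making the logging pipeline resilient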
The last thing you want is your application going down because the logging system hit an error. A few configuration choices keep this from happening.
Buffer logs with MemoryHandler
When you're shipping logs to an external service, a network hiccup can block your application thread if the handler is synchronous. Wrapping your external handler in a MemoryHandler buffers log records in memory and flushes them in batches. The flush itself still runs on whichever thread triggers it, but the per-record cost drops to an in-memory append; for full decoupling, see the QueueHandler sketch after the config below.
import logging  # needed for the numeric flushLevel below

LOGGING = {
    ...
    "handlers": {
        "external_service": {
            "class": "logging.handlers.HTTPHandler",
            "host": "logs.example.com",
            "url": "/ingest",
            "method": "POST",
        },
        "buffered": {
            "class": "logging.handlers.MemoryHandler",
            "capacity": 100,  # flush after 100 records
            # flushLevel is passed straight to the handler's constructor,
            # so it must be a numeric level, not the string "ERROR"
            "flushLevel": logging.ERROR,  # also flush immediately on ERROR+
            "target": "external_service",
        },
    },
    ...
}
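For full decoupling, the standard library's QueueHandler and QueueListener move the slow write onto a dedicated thread, so the application thread only ever appends to an in-memory queue. A programmatic sketch (the host and endpoint are placeholders):

import logging
import logging.handlers
import queue

log_queue = queue.SimpleQueue()

# the application thread only enqueues; this never blocks on the network
queue_handler = logging.handlers.QueueHandler(log_queue)

# the slow handler runs on the listener's own thread
http_handler = logging.handlers.HTTPHandler(
    "logs.example.com", "/ingest", method="POST"
)
listener = logging.handlers.QueueListener(
    log_queue, http_handler, respect_handler_level=True
)
listener.start()  # call listener.stop() at shutdown to flush

logging.getLogger().addHandler(queue_handler)

Always have a fallback handler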
Python handles logging failures in two different ways, and neither should be left to chance. A record that reaches a logger with no handlers anywhere up the tree falls through to logging.lastResort, a stderr handler set to WARNING. A handler that raises during emit calls its handleError() method, which prints the error to stderr while logging.raiseExceptions is true and otherwise drops the record silently. The practical defence against both is the same: keep a plain StreamHandler on your root logger so that if file or external handlers fail, output still reaches stdout or stderr where your container runtime or process supervisor can capture it.
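The last-resort handler is a module-level attribute you can tune directly. A small sketch, assuming you want it to catch everything that falls through:

import logging

# by default logging.lastResort is a stderr handler at WARNING level that
# fires only when a record reaches a logger with no handlers up the tree
logging.lastResort.setLevel(logging.DEBUG)

# setting logging.lastResort = None disables the safety net entirely

And the declarative version of the root-logger safety net: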
LOGGING = {
    ...
    "root": {
        # The safety net: any logger without handlers of its own propagates
        # here, and the console handler keeps working even when file or
        # network handlers are broken.
        "handlers": ["console"],
        "level": "WARNING",
    },
    ...
}

✦ Tip
In containerised deployments, stdout is enough
If you're running Django in Docker or Kubernetes, writing to stdout and stderr is often all you need. Your orchestration layer ships those streams to wherever logs are stored. A simple StreamHandler on the root logger plus a structured formatter gets you most of the way there without any infrastructure complexity.
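A minimal sketch of that setup, reusing the python-json-logger formatter from earlier:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "json": {
            "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
            "format": "%(asctime)s %(levelname)s %(name)s %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",  # writes to stderr by default
            "formatter": "json",
        },
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}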
Summary
Good logging is what turns a 2am incident from a guessing game into a diagnosis. Here's what to walk away with.
- Use logging.getLogger(__name__) in every module. The hierarchy gives you fine-grained control from settings.
- Configure RotatingFileHandler so log files don't grow until they fill the disk.
- Add a request ID middleware and filter so every log line for a request shares a traceable identifier.
- Use structlog for structured JSON output, especially if you're shipping logs to a centralised service.
- Use logger.exception() inside except blocks to capture the full traceback automatically.
- Never log passwords, tokens, card numbers, or raw personal data. Log IDs and masked values instead.
- Always have a fallback handler on the root logger so records don't disappear silently when other handlers fail.
- In containers, writing to stdout with a structured formatter is often the simplest production-ready setup.