Closing DB connections that are unused by rossops · Pull Request #14452 · DefectDojo/django-DefectDojo

Description

Celery workers are long-lived processes without Django's request/response lifecycle. Django normally closes stale DB connections at the start and end of each HTTP request, but Celery tasks never trigger that lifecycle. As a result:

  • Connections accumulate and are never closed or recycled
  • CONN_MAX_AGE timeouts are never enforced between tasks

For users with high-frequency usage, this can cause the database to consume significant resources due to idle-connection build-up.

Fix: Two signal handlers on task_prerun and task_postrun that call close_old_connections():

  • task_prerun — closes stale connections before each task runs. This handles the case where a connection was left open by a previous task and may have timed out at the DB level (e.g., PostgreSQL idle_in_transaction_session_timeout). Using a dead connection would cause an error; this preempts it.
  • task_postrun — closes connections after each task. This is the primary cleanup path, ensuring connections aren't held open between tasks.
  • close_old_connections() respects CONN_MAX_AGE: with the default of 0 it closes all connections; with a non-zero value it only closes connections older than the configured age.

I've set this to 300s versus the default of 0. One note on semantics: Django measures CONN_MAX_AGE from when the connection was opened, not from its last use, so close_old_connections() closes any connection that has existed for more than 300s, even if it was active recently. An important distinction.
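In settings terms, the change described above looks roughly like this; only the CONN_MAX_AGE value comes from this PR's description, while the engine and other keys are illustrative placeholders:

```python
# settings.py sketch -- CONN_MAX_AGE=300 is from this PR; the rest is
# an illustrative placeholder, not the project's actual configuration.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "example_db",
        "CONN_MAX_AGE": 300,  # seconds; Django's default is 0 (close after each request)
    }
}
```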