Debugging Memgraph

Why enable user debugging?

User-driven debugging helps improve Memgraph’s performance and reliability by providing diagnostic data from your environment. This data assists us in reproducing and resolving issues faster, especially for bugs that are hard to replicate.

To help with this, our containers come equipped with user-friendly debugging tools, empowering you to identify and report problems more effectively.

Choose the right debug image

Memgraph provides Docker images built in debug mode that include tools such as GDB, perf, and heaptrack. These images are about 10% slower but enable detailed debugging.

To pull a debug image:

For Memgraph MAGE:
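For example, the pulls might look like this (the tags below are assumed examples — check Docker Hub for the exact debug-build tag of your version):

```shell
# Assumed example tags for debug builds; verify the exact tag on Docker Hub
MG_IMAGE="memgraph/memgraph:3.2.0-relwithdebinfo"
MAGE_IMAGE="memgraph/memgraph-mage:3.2.0-relwithdebinfo"

# Printed here; run the pulls on your Docker host
printf 'docker pull %s\ndocker pull %s\n' "$MG_IMAGE" "$MAGE_IMAGE"
```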

All the images built in this mode carry a dedicated suffix in their tag.

Run Memgraph in debug mode

Run the Memgraph container in privileged mode so that debugging tools such as GDB and perf can function:

Below is an example command that runs a Memgraph container in privileged mode:
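A sketch of such a command (the image tag is an assumed example):

```shell
# Assumed example debug tag; adjust to your version
IMAGE="memgraph/memgraph:3.2.0-relwithdebinfo"

# --privileged lets ptrace-based tools (GDB, perf) work inside the container;
# 7687 is Memgraph's Bolt port. Printed here; run it on your Docker host.
CMD="docker run --rm -it --privileged -p 7687:7687 $IMAGE"
echo "$CMD"
```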

Accessing the container

All debugging is performed inside the container. To enter it, execute the following command.

Root privileges inside the container are required for the debugging tools to run.
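A sketch of entering the container as root (the container name is an example):

```shell
# Assumed container name (whatever you passed to `docker run --name`)
CONTAINER="memgraph"

# -u 0 requests a root shell, which the debugging tools need.
# Printed here; run it on your Docker host.
CMD="docker exec -u 0 -it $CONTAINER bash"
echo "$CMD"
```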

Debugging tools overview

Memgraph supports the following debug capabilities:

  1. Using GDB: Attaching GDB to the Memgraph process and inspecting threads
  2. Generating a core dump after Memgraph crashes
  3. Running Memgraph in GDB inside Docker: Directly running Memgraph or MAGE under GDB in a Docker container
  4. Using perf to identify performance bottlenecks

Using GDB

GDB comes preinstalled in the Memgraph debug container, which also contains the debug symbols. Since Memgraph is already running inside the container, you can attach to the running process with the following command:

Most likely, the Memgraph process will have PID 1, but to be certain, we look the PID up by process name.
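A sketch of the attach command, looking the PID up by name:

```shell
# Look the PID up by name instead of assuming it is 1, then attach GDB to it.
# Printed here; run it inside the debug container as root.
CMD='gdb -p "$(pgrep memgraph)"'
echo "$CMD"
```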

Useful GDB commands:

In Memgraph, we usually first want to see all the threads currently running. We do that by issuing:

Once we identify a thread running code that belongs to the Memgraph repository, we can switch to it with the command

where the argument is the specific thread number.

The backtrace of the selected thread can then be printed with the command
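The three inspection steps above can be sketched as a GDB session (standard GDB commands; `2` is an example thread number):

```
(gdb) info threads   # list all threads and where each one is executing
(gdb) thread 2       # switch to thread number 2
(gdb) bt             # print the backtrace of the selected thread
```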

Generating core dump via Docker

To generate a core dump, a few steps are required both on the host and inside the container.
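On the host side, the usual prerequisites are an unlimited core size limit and a writable core pattern (the path and pattern below are examples):

```shell
# Allow core files of unlimited size in the current shell
ulimit -c unlimited

# Point the kernel at a writable location (requires root; example pattern) —
# uncomment and run on the host:
# echo '/tmp/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# Verify the resulting soft limit
ulimit -c
```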

Generating core dump via Docker Compose

The setup with Docker Compose is similar to Docker. You will need to bind the volume, run Memgraph in privileged mode, and make sure you set no size limit on the generated core dump.

Below is an example Docker Compose file that can generate a core dump:
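A minimal sketch of such a file, assuming an example debug tag and example paths:

```yaml
services:
  memgraph:
    image: memgraph/memgraph:3.2.0-relwithdebinfo   # assumed debug tag
    privileged: true              # required by the debugging tools
    ports:
      - "7687:7687"
    ulimits:
      core: -1                    # no size limit on generated core dumps
    volumes:
      - ./cores:/tmp/cores        # example host directory for the dumps
```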

Running Memgraph in GDB inside Docker

To run Memgraph or MAGE in GDB inside a Docker container, you can use the following commands to override the entry point and create a bind mount for core dumps:
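A sketch, assuming the debug image tag and the binary path /usr/lib/memgraph/memgraph (verify both for your version):

```shell
IMAGE="memgraph/memgraph:3.2.0-relwithdebinfo"   # assumed debug tag

# Override the entrypoint so GDB launches Memgraph, and bind-mount a host
# directory for core dumps; \$(pwd) stays literal so the printed command
# expands on the host. Printed here; run it on your Docker host.
CMD="docker run --rm -it --privileged -v \$(pwd)/cores:/tmp/cores --entrypoint gdb $IMAGE -ex run -ex bt /usr/lib/memgraph/memgraph"
echo "$CMD"
```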

Optionally, the core dump path can be overridden by setting an environment variable:

If Memgraph crashes, a core dump will be created in the directory specified by that variable and the full backtrace will be printed to the terminal.

Using heaptrack with Docker

All images come with heaptrack installed. You can use it to track Memgraph's memory usage.

Before starting the container, create a directory to store the heaptrack data:

Then start the container with the following command:
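One possible shape, assuming heaptrack is on the image's PATH and the binary lives at /usr/lib/memgraph/memgraph (both assumptions):

```shell
# Host directory that will receive heaptrack's output
mkdir -p "$HOME/heaptrack"

IMAGE="memgraph/memgraph:3.2.0-relwithdebinfo"   # assumed debug tag

# heaptrack wraps the Memgraph binary and writes its profile to the mounted
# directory. Printed here; run it on your Docker host.
CMD="docker run --rm -it -p 7687:7687 -v \$HOME/heaptrack:/tmp/heaptrack --entrypoint heaptrack $IMAGE -o /tmp/heaptrack/memgraph /usr/lib/memgraph/memgraph"
echo "$CMD"
```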

To stop gracefully:

The heaptrack GUI can then be used on the host machine to inspect the recorded data:

Profiling with perf

Running perf is the most common operation when Memgraph is hanging or performing slowly.

The next steps explain how to check which parts of Memgraph are stalling during query execution, so the information can be used to improve the system.

Before running perf, you need to bind the Memgraph binary to the local filesystem. You can start Memgraph with the volume bound like this:
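One way to sketch this is with a named volume, which Docker populates with the image's /usr/lib/memgraph contents on first use (tag and paths are assumptions):

```shell
IMAGE="memgraph/memgraph:3.2.0-relwithdebinfo"   # assumed debug tag

# The named volume is filled with the image's /usr/lib/memgraph contents,
# making the binary and its symbols reachable from the host.
# Printed here; run it on your Docker host.
CMD="docker run --rm -it --privileged -p 7687:7687 -v memgraph_lib:/usr/lib/memgraph $IMAGE"
echo "$CMD"
```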

Debugging Memgraph under Kubernetes (k8s)

General commands

To begin with, the master of all kubectl commands is:

Managing nodes:

Managing pods:

Events:

Cluster:

StatefulSets:
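Typical standard kubectl commands for the areas above, as a quick sketch (placeholders in angle brackets):

```
# Nodes
kubectl get nodes -o wide
kubectl describe node <node-name>

# Pods
kubectl get pods -A
kubectl describe pod <pod-name>
kubectl logs <pod-name>

# Events
kubectl get events --sort-by=.metadata.creationTimestamp

# Cluster
kubectl cluster-info

# StatefulSets
kubectl get statefulsets
```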

Debugging Memgraph pods

You can attach GDB to a running Memgraph pod using ephemeral debug containers. This approach injects a debug container into an existing pod — no need to redeploy or create a separate privileged pod.

Requirements: kubectl 1.32+, Kubernetes 1.25+ (ephemeral containers must be enabled).

The official Kubernetes documentation on debugging running pods covers additional techniques, including node-level debugging.
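A sketch of injecting such an ephemeral container (pod name, image, and target container name are example values):

```shell
POD="memgraph-0"        # example pod name
TARGET="memgraph"       # example container name inside the pod

# --target shares the Memgraph container's process namespace;
# --profile=sysadmin runs the debug container privileged so GDB can ptrace.
# Printed here; run it against your cluster.
CMD="kubectl debug -it pod/$POD --image=ubuntu:24.04 --target=$TARGET --profile=sysadmin -- bash"
echo "$CMD"
```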

Handling core dumps

When Memgraph crashes, for example due to a segmentation fault, core dumps can provide invaluable insight for debugging. The Memgraph Helm charts provide an easy way to enable persistent core dump storage through a dedicated chart option.

To enable core dumps, create a values file with at least the following setting:
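As a sketch — the key below is a hypothetical placeholder; consult the chart's values reference for the exact option name:

```yaml
# Hypothetical key name — check the Memgraph chart's values reference
coreDumps:
  enabled: true
```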

If you’re running the Memgraph high-availability chart, you can automatically upload core dumps to S3.

Setting this value to true will also enable the use of GDB inside Memgraph containers when using our provided charts.

This instructs the Helm chart to create a PersistentVolumeClaim (PVC) to store core dumps generated by the Memgraph process.

By default, the storage size is 10GiB. Core dumps can be as large as your node's total RAM, so it's recommended to set this size explicitly by adjusting the corresponding setting in the values file.

Make sure to use the debug image of Memgraph by also setting the image tag in the values file.

Run the following command to install Memgraph with the debugging configuration:
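A sketch of the installation, assuming the public Memgraph Helm chart repository (names and URL are assumptions — verify them) and your values file:

```shell
# Repository URL and release/chart names are assumptions — verify them
CMD1="helm repo add memgraph https://memgraph.github.io/helm-charts"
CMD2="helm install memgraph memgraph/memgraph -f values.yaml"
printf '%s\n%s\n' "$CMD1" "$CMD2"
```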

The core dumps are written to a mounted volume inside the container; the default path can be changed through the chart configuration. You can then inspect or copy the files for post-mortem analysis.

If your Kubernetes cluster runs with a major cloud provider and you want to store the dumps in S3, the best repository to check out is probably core-dump-handler.

Profiling Memgraph in Kubernetes

Profile a Memgraph process running inside a Kubernetes pod using perf and generate flame graphs.

Prerequisites

  • kubectl configured with access to your cluster
  • A running Memgraph deployment (standalone or HA)

Step 1: Identify the target pod

In this example, we want to profile the pod that currently acts as the MAIN instance. Note the NODE it is running on — the debug pod must be scheduled on the same node.

Step 2: Deploy the debug pod

Edit the debug pod manifest and set its node name to match the target pod's node:
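A minimal debug pod manifest might look like this (name, node name, and image are example values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: perf-debug                 # example name
spec:
  nodeName: worker-node-1          # set to the target pod's node
  hostPID: true                    # see host processes
  containers:
    - name: debug
      image: ubuntu:24.04          # example image
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true           # required for perf / ptrace
```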

The debug pod needs access to the host PID namespace so it can see host processes, and access to the Kubernetes API so it can match processes to pods.

Step 3: Find the Memgraph PID

Since multiple Memgraph processes may be visible from the host PID namespace (due to Kubernetes multi-tenancy), we need to match the correct one to our target pod. The helper script does this automatically — it resolves the pod's UID, lists the processes visible inside the debug pod, and matches them by that UID:

Output:

Command-line options let you specify a different debug pod name or a non-default namespace:

Step 4: Install perf in the debug pod

Inside the debug pod:
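On an Ubuntu-based debug image, this might look like the following (package name is the standard Ubuntu one):

```shell
# Printed here; run it inside the debug pod
CMD="apt-get update && apt-get install -y linux-tools-generic"
echo "$CMD"
```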

Note (AKS / cloud kernels): installing the kernel-matched perf package will fail if the host kernel is a cloud-specific variant, because the matching package isn't in the standard Ubuntu repositories. Use the generic package instead — the generic perf binary works in most cases. If it complains about a version mismatch, invoke the binary directly:

Step 5: Record a perf profile

Replace the placeholder with the PID from Step 3 and adjust the recording duration as needed — run your workload during this window.
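A sketch of the recording command (PID and duration are placeholders):

```shell
PID=12345       # placeholder: the PID found in Step 3
DURATION=60     # seconds; run your workload during this window

# -F 99 samples at 99 Hz; -g records call graphs.
# Printed here; run it inside the debug pod.
CMD="perf record -F 99 -g -p $PID -- sleep $DURATION"
echo "$CMD"
```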

Step 6: Generate a flame graph
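One common way, assuming Brendan Gregg's FlameGraph scripts are cloned next to the recorded perf.data (an assumption):

```shell
# Collapse the recorded stacks and render an interactive SVG flame graph.
# Printed here; run it inside the debug pod next to perf.data.
CMD="perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > memgraph-flame.svg"
echo "$CMD"
```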

Step 7: Copy results and clean up

From your local machine:
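A sketch of copying the result out and removing the debug pod (namespace, pod, and file names are example values):

```shell
NS="default"        # example namespace
POD="perf-debug"    # example debug pod name

# Copy the flame graph out, then delete the debug pod.
# Printed here; run it on your local machine.
CMD1="kubectl cp $NS/$POD:/memgraph-flame.svg ./memgraph-flame.svg"
CMD2="kubectl delete pod $POD -n $NS"
printf '%s\n%s\n' "$CMD1" "$CMD2"
```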

Open the generated flame graph in a browser to explore it interactively.

Specific cloud provider instructions

The k8s quick reference is an amazing set of commands!
