A Temporal plugin that exports OpenTelemetry traces, logs, and metrics directly to Parseable — no intermediate OTel Collector required.
## Architecture

```
┌──────────────┐    OTLP/HTTP     ┌──────────────┐
│   Temporal   │ ──────────────── │  Parseable   │
│    Worker    │  traces / logs   │              │
│   + Plugin   │    / metrics     │  temporal-*  │
└──────────────┘                  └──────────────┘
```
The plugin uses `SimplePlugin` from the Temporal SDK and `TracingInterceptor` from `temporalio.contrib.opentelemetry` to capture distributed traces across workflow and activity boundaries. Python `logging` calls from workflows and activities are bridged to OTel log records.
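The "direct to Parseable, no collector" path can be pictured with a small sketch. This is not the plugin's actual code: the `/api/v1/ingest` path and the `X-P-Stream` header are assumptions about Parseable's HTTP ingestion API, and the event shape simply mirrors the trace columns stored in the `temporal-traces` stream.

```python
# Hypothetical sketch (NOT the plugin's implementation) of exporting span
# data straight to a Parseable stream over HTTP, with no OTel Collector.
import base64
import json
import urllib.request


def build_span_event(trace_id: str, span_id: str, service: str, operation: str) -> dict:
    """Flatten one span into a JSON event matching the temporal-traces columns."""
    return {
        "trace_id": trace_id,
        "span_id": span_id,
        "service_name": service,
        "operation_name": operation,
    }


def send_to_parseable(events, url="http://localhost:8000", stream="temporal-traces",
                      user="admin", password="admin"):
    """POST a batch of events to a Parseable stream (requires a running server)."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{url}/api/v1/ingest",              # assumed Parseable ingest endpoint
        data=json.dumps(events).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {auth}",
            "X-P-Stream": stream,            # assumed stream-selection header
        },
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In the real plugin this batching and export is handled by the OTel SDK's OTLP/HTTP exporters; the sketch only shows why no intermediate collector is needed.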
## Quick Start
### 1. Install
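The install command is not shown in the source. Assuming the package is published under a name matching its `temporal_parseable` import (an assumption, not confirmed here), installation would look something like:

```shell
# Hypothetical package name, matching the temporal_parseable module
pip install temporal-parseable
```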
### 2. Start Infrastructure
```shell
cd docker
docker compose up -d
```

This starts Parseable (:8000) and Temporal (:7233).
### 3. Run the Demo
```shell
# Terminal 1 — worker
cd demo
python worker.py

# Terminal 2 — client
cd demo
python client.py
```
### 4. Verify in Parseable
Open http://localhost:8000 (admin/admin) and check the streams:
```sql
SELECT trace_id, span_id, service_name, operation_name
FROM "temporal-traces" ORDER BY p_timestamp DESC LIMIT 20;

SELECT p_timestamp, severity_text, body, service_name
FROM "temporal-logs" ORDER BY p_timestamp DESC LIMIT 20;

SELECT * FROM "temporal-metrics" LIMIT 20;
```
## Usage
```python
import asyncio

from temporalio.client import Client
from temporalio.worker import Worker

from temporal_parseable import ParseablePlugin, ParseableConfig

config = ParseableConfig()
plugin = ParseablePlugin(config)
runtime = plugin.create_runtime()

client = await Client.connect(
    config.temporal_host,
    plugins=[plugin],
    runtime=runtime,
)

async with Worker(
    client,
    task_queue="my-queue",
    workflows=[MyWorkflow],
    activities=[my_activity],
    plugins=[plugin],
):
    await asyncio.Event().wait()
```
## Configuration
All settings are configurable via environment variables with the PARSEABLE_ prefix:
| Variable | Default | Description |
|---|---|---|
| `PARSEABLE_URL` | `http://localhost:8000` | Parseable server URL |
| `PARSEABLE_USERNAME` | `admin` | Parseable username |
| `PARSEABLE_PASSWORD` | `admin` | Parseable password |
| `PARSEABLE_TRACES_STREAM` | `temporal-traces` | Stream name for trace data |
| `PARSEABLE_LOGS_STREAM` | `temporal-logs` | Stream name for log data |
| `PARSEABLE_METRICS_STREAM` | `temporal-metrics` | Stream name for metric data |
| `PARSEABLE_TEMPORAL_HOST` | `localhost:7233` | Temporal server address |
| `PARSEABLE_TEMPORAL_NAMESPACE` | `default` | Temporal namespace |
| `PARSEABLE_SERVICE_NAME` | `temporal-worker` | OTel service name |
| `PARSEABLE_ENABLE_TRACES` | `true` | Enable trace export |
| `PARSEABLE_ENABLE_LOGS` | `true` | Enable log export |
| `PARSEABLE_ENABLE_METRICS` | `true` | Enable metric export |
| `PARSEABLE_ENABLE_UI` | `true` | Enable embedded admin UI |
| `PARSEABLE_UI_PORT` | `8100` | Port for admin UI |
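As an illustration of the env-var scheme in the table, a config object with these defaults could be sketched as follows. This is illustrative only — `ExampleConfig` and `_env` are hypothetical names, not the plugin's actual `ParseableConfig` implementation.

```python
# Illustrative sketch of mapping PARSEABLE_-prefixed env vars to defaults.
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    """Read PARSEABLE_<name> from the environment, falling back to a default."""
    return os.environ.get(f"PARSEABLE_{name}", default)


@dataclass
class ExampleConfig:
    url: str = field(default_factory=lambda: _env("URL", "http://localhost:8000"))
    username: str = field(default_factory=lambda: _env("USERNAME", "admin"))
    traces_stream: str = field(default_factory=lambda: _env("TRACES_STREAM", "temporal-traces"))
    temporal_host: str = field(default_factory=lambda: _env("TEMPORAL_HOST", "localhost:7233"))
    enable_traces: bool = field(default_factory=lambda: _env("ENABLE_TRACES", "true").lower() == "true")
```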
Copy `.env.example` to `.env` and modify as needed.
## Configuration UI
A web-based admin panel is available for configuring and monitoring the plugin. Install the UI extra:
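The exact install command for the extra is not shown here. Assuming the package is published as `temporal-parseable` (an unconfirmed assumption based on the `temporal_parseable` module and the `temporal-parseable-ui` command), it would be:

```shell
# Hypothetical package name; the [ui] extra pulls in the admin panel deps
pip install "temporal-parseable[ui]"
```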
### Embedded Mode (default)
When the worker starts, the admin UI is automatically available on port 8100 with live status:
```shell
cd demo && python worker.py
# Admin UI at http://localhost:8100 — shows green "Worker Active" banner
```
The UI shows live worker status (uptime, active signals) and pre-populates the form from the running config. Saving writes a .env file that takes effect on worker restart.
Disable with PARSEABLE_ENABLE_UI=false or change the port with PARSEABLE_UI_PORT=9000.
### Standalone Mode
For initial setup before Temporal is running, use the standalone command:
```shell
temporal-parseable-ui
# Admin UI at http://localhost:8100 — shows orange "Setup Mode" banner
```

Configure connection details, test connectivity, and save a .env file. Then start the worker to activate the plugin.
### Collector Mode
For environments that already run an OTel Collector:
```shell
cd docker
docker compose -f docker-compose.yaml -f docker-compose.collector.yaml up -d
```

Then point your worker at the collector instead of Parseable directly:

```shell
export PARSEABLE_URL=http://localhost:4318
```

## Development
```shell
pip install -e ".[dev]"
pytest
```

## License
MIT