# Related Components
The Workflow Manager integrates with several other CCAT Data Center components. This page describes the integration points and dependencies.
```mermaid
graph TD
    DT["data-transfer<br/>Archive & Stage"]
    WM["workflow-manager<br/>Pipeline Orchestration"]
    DB["ops-db<br/>Database Models"]
    API["ops-db-api<br/>REST API"]
    UI["ops-db-ui<br/>Web Frontend"]
    SI["system-integration<br/>Deployment"]
    Redis["Redis<br/>Task Broker"]
    DT -->|"Shared broker<br/>Staging jobs"| Redis
    WM -->|"Task dispatch"| Redis
    WM -->|"Read/write models"| DB
    API -->|"Pipeline endpoints"| DB
    UI -->|"Pipeline dashboard"| API
    SI -->|"Docker Compose"| WM
    DT -->|"RawDataPackage<br/>records"| DB
    style WM fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
```
## ops-db
The operations database is the single source of truth for all pipeline metadata. The Workflow Manager depends on ops-db for:
- **All pipeline models** — `Pipeline`, `ReductionStep`, `ExecutedReductionStep`, `DataProduct`, `ReductionSoftware`, `DataGrouping`, etc.
- **Shared enums** — `RunStatus`, `TriggerType`, `DataProductType`
- **Observation models** — `RawDataPackage`, `ObsUnit`, `Source`, `InstrumentModule` (used by the filter engine for data grouping)
- **Location models** — `DataLocation`, `PhysicalCopy` (for processing location assignment and output tracking)
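The relationships among the core pipeline models can be sketched with plain dataclasses. This is an illustrative simplification, not the real ops-db schema: the class names come from the list above, but the fields and enum values shown here are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class RunStatus(Enum):
    """Shared enum from ops-db (values here are illustrative)."""
    PENDING = "pending"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


@dataclass
class ReductionStep:
    """One step in a pipeline definition."""
    name: str
    order: int


@dataclass
class Pipeline:
    """A pipeline is an ordered sequence of reduction steps."""
    name: str
    steps: list[ReductionStep] = field(default_factory=list)


@dataclass
class ExecutedReductionStep:
    """A concrete run of one step, tracked with a RunStatus."""
    step: ReductionStep
    status: RunStatus = RunStatus.PENDING


# Build a tiny two-step pipeline and record one execution.
pipeline = Pipeline(
    name="example-reduction",
    steps=[ReductionStep("calibrate", 1), ReductionStep("map", 2)],
)
run = ExecutedReductionStep(step=pipeline.steps[0], status=RunStatus.RUNNING)
```

In the real system these are database models, so the `Pipeline` → `ReductionStep` and step → `ExecutedReductionStep` links above would be foreign-key relationships rather than in-memory lists.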
## data-transfer
The Workflow Manager builds on data-transfer’s infrastructure:
- **Shared Redis broker** — both systems use the same Redis for Celery task dispatch, with separate queue prefixes (`workflow.*` vs `transfer.*`)
- **Staging jobs** — the workflow-manager reuses data-transfer's `StagingJob` mechanism to stage raw data from archives to HPC processing locations
- **PhysicalCopy lifecycle** — intermediate and output products follow the same PRESENT → DELETION_POSSIBLE → DELETED lifecycle
- **Shared patterns** — `DatabaseConnection`, `HealthCheck`, `StructuredLogger`, `make_celery_task()`
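The queue isolation can be illustrated with a small routing sketch. The `workflow.*` / `transfer.*` prefixes come from the text above; the helper function and the default queue name are hypothetical.

```python
from fnmatch import fnmatch

# Prefix-based routing, mirroring the workflow.* / transfer.* split.
TASK_ROUTES = {
    "workflow.*": "workflow",   # workflow-manager tasks
    "transfer.*": "transfer",   # data-transfer tasks
}


def route_task(task_name: str, default: str = "celery") -> str:
    """Return the queue a task name would be dispatched to."""
    for pattern, queue in TASK_ROUTES.items():
        if fnmatch(task_name, pattern):
            return queue
    return default
```

In Celery itself this mapping would be supplied through the `task_routes` setting rather than matched by hand; the sketch only shows why the prefixes keep the two systems' tasks from landing on each other's workers.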
## ops-db-api

The Operations Database API (ops-db-api) exposes pipeline functionality over REST to the frontend and external consumers:
- `/pipelines/software/` — CRUD for ReductionSoftware and versions
- `/pipelines/configs/` — CRUD for ReductionStepConfig
- `/pipelines/groupings/` — CRUD for DataGrouping + resolve preview + presets
- `/pipelines/` — Pipeline CRUD + nested step management + trigger
- `/pipelines/runs/` — Run listing, detail, logs, cancel, retry
- `/data-products/` — Product listing + lineage tracking
- `/ux/pipeline-dashboard` — Aggregate statistics for the UI
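As an illustration of how a consumer might address these endpoints, here is a minimal path-building sketch. The class and method names are hypothetical; only the URL paths come from the API listing above, and real calls would go through an HTTP library.

```python
class PipelineAPIClient:
    """Hypothetical helper that composes ops-db-api endpoint URLs."""

    def __init__(self, base_url: str):
        # Normalize so joining never produces a double slash.
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str) -> str:
        return f"{self.base_url}{path}"

    def runs(self) -> str:
        """Run listing endpoint (detail, logs, cancel, retry live under it)."""
        return self._url("/pipelines/runs/")

    def groupings(self) -> str:
        """DataGrouping CRUD + resolve preview + presets."""
        return self._url("/pipelines/groupings/")

    def dashboard(self) -> str:
        """Aggregate statistics consumed by the UI."""
        return self._url("/ux/pipeline-dashboard")


client = PipelineAPIClient("https://ops.example.org/api/")
```

The base URL here is a placeholder; the actual host and any authentication scheme are deployment-specific.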
## ops-db-ui

The Operations Database UI (ops-db-ui) web frontend provides a pipeline dashboard showing:
- Pipeline definitions with their steps and status
- Run history with filtering by pipeline, step, and status
- Data product browser with provenance lineage
- DataGrouping management with interactive filter building and sub-group preview
## system-integration

Deployment infrastructure (see the CCAT System Integration documentation):
- `docker-compose.staging.workflow-manager.yml` — Docker Compose for staging
- Four containers: trigger-manager, workflow-manager, result-manager, celery-worker
- Environment variable configuration for each service
- Volume mounts for pipeline workspace and Redis certificates
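A hedged sketch of what the four-container layout might look like in Compose. The service names come from the list above; the image tags, environment variables, commands, and mount paths are placeholders, not the contents of the real file.

```yaml
services:
  trigger-manager:
    image: workflow-manager:staging      # placeholder image tag
    environment:
      - SERVICE_ROLE=trigger-manager     # hypothetical variable
  workflow-manager:
    image: workflow-manager:staging
    volumes:
      - pipeline-workspace:/workspace    # pipeline workspace mount
      - ./certs/redis:/certs/redis:ro    # Redis TLS certificates
  result-manager:
    image: workflow-manager:staging
  celery-worker:
    image: workflow-manager:staging
    command: celery -A workflow_manager worker   # hypothetical entrypoint

volumes:
  pipeline-workspace:
```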
## Redis
Shared message broker for both data-transfer and workflow-manager Celery tasks:
- Broker URL with TLS authentication
- Queue isolation via routing configuration
- Health check heartbeat storage
- Local backend job tracking (for `HPC_BACKEND=local`)
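A sketch of Celery settings for a TLS Redis connection. `broker_use_ssl` and its keys are standard Celery settings, and the `rediss://` scheme is how Celery enables TLS to Redis; the URL, password, and certificate paths below are placeholders.

```python
import ssl

# Placeholder broker URL; rediss:// (double "s") selects TLS.
BROKER_URL = "rediss://:change-me@redis.example.org:6379/0"

# Celery's broker_use_ssl setting for certificate-verified connections.
BROKER_USE_SSL = {
    "ssl_cert_reqs": ssl.CERT_REQUIRED,          # verify the server cert
    "ssl_ca_certs": "/certs/redis/ca.pem",       # placeholder paths
    "ssl_certfile": "/certs/redis/client.crt",
    "ssl_keyfile": "/certs/redis/client.key",
}
```

In a deployment like the one described above, the certificate paths would point at the mounted Redis certificate volume, and both data-transfer and workflow-manager would share the same broker URL while relying on queue routing for isolation.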