# Related Components
```{eval-rst}
.. verified:: 2026-03-07
   :reviewer: Christof Buchbender
```
The Workflow Manager integrates with several other CCAT Data Center components. This
page describes the integration points and dependencies.
```{eval-rst}
.. mermaid::

   graph TD
       DT["data-transfer<br/>Archive & Stage"]
       WM["workflow-manager<br/>Pipeline Orchestration"]
       DB["ops-db<br/>Database Models"]
       API["ops-db-api<br/>REST API"]
       UI["ops-db-ui<br/>Web Frontend"]
       SI["system-integration<br/>Deployment"]
       Redis["Redis<br/>Task Broker"]
       DT -->|"Shared broker<br/>Staging jobs"| Redis
       WM -->|"Task dispatch"| Redis
       WM -->|"Read/write models"| DB
       API -->|"Pipeline endpoints"| DB
       UI -->|"Pipeline dashboard"| API
       SI -->|"Docker Compose"| WM
       DT -->|"RawDataPackage<br/>records"| DB
       style WM fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
```
## ops-db
{doc}`/ops-db/docs/index`
The operations database is the single source of truth for all pipeline metadata.
The Workflow Manager depends on ops-db for:
- **All pipeline models** --- Pipeline, ReductionStep, ExecutedReductionStep,
DataProduct, ReductionSoftware, DataGrouping, etc.
- **Shared enums** --- RunStatus, TriggerType, DataProductType
- **Observation models** --- RawDataPackage, ObsUnit, Source, InstrumentModule
(used by the filter engine for data grouping)
- **Location models** --- DataLocation, PhysicalCopy (for processing location
assignment and output tracking)
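Because both packages import the same enum classes, a status written by the workflow-manager round-trips unambiguously through ops-db. A minimal sketch of what such shared enums might look like (the member names and values here are illustrative assumptions; the real ops-db definitions may differ):

```python
from enum import Enum


class RunStatus(str, Enum):
    # Illustrative members; the actual ops-db enum may define more states
    PENDING = "pending"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


class TriggerType(str, Enum):
    MANUAL = "manual"
    DATA_ARRIVAL = "data_arrival"
    SCHEDULED = "scheduled"


# A status string stored by one component resolves to the identical
# enum member when read back by another component.
assert RunStatus("running") is RunStatus.RUNNING
```

Subclassing `str` keeps the values JSON-serializable without custom encoders, which matters when the same enum flows through the REST API.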
## data-transfer
{doc}`/data-transfer/docs/index`
The Workflow Manager builds on data-transfer's infrastructure:
- **Shared Redis broker** --- both systems use the same Redis instance for Celery task
  dispatch, isolated by separate queue prefixes (`workflow.*` vs `transfer.*`)
- **Staging jobs** --- the workflow-manager reuses data-transfer's `StagingJob`
mechanism to stage raw data from archives to HPC processing locations
- **PhysicalCopy lifecycle** --- intermediate and output products follow the same
PRESENT → DELETION_POSSIBLE → DELETED lifecycle
- **Shared patterns** --- DatabaseConnection, HealthCheck, StructuredLogger,
make_celery_task()
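The queue isolation above can be sketched with Celery-style glob routing, emulated here with the standard library's `fnmatch` (the specific task and queue names are illustrative assumptions, not the project's actual names):

```python
from fnmatch import fnmatch

# Hypothetical route table; in Celery this would be app.conf.task_routes
TASK_ROUTES = {
    "workflow.*": "workflow.default",
    "transfer.*": "transfer.default",
}


def route(task_name: str) -> str:
    """Return the queue a task name maps to (mimics Celery's glob routing)."""
    for pattern, queue in TASK_ROUTES.items():
        if fnmatch(task_name, pattern):
            return queue
    raise ValueError(f"no route for {task_name}")


print(route("workflow.run_step"))    # workflow.default
print(route("transfer.stage_file"))  # transfer.default
```

With prefix-based routing, workflow-manager workers subscribe only to `workflow.*` queues and data-transfer workers only to `transfer.*`, so the two systems share one broker without ever consuming each other's tasks.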
## ops-db-api
{doc}`/ops-db-api/docs/index`
The REST API exposes pipeline functionality to the frontend and external consumers:
- `/pipelines/software/` --- CRUD for ReductionSoftware and versions
- `/pipelines/configs/` --- CRUD for ReductionStepConfig
- `/pipelines/groupings/` --- CRUD for DataGrouping + resolve preview + presets
- `/pipelines/` --- Pipeline CRUD + nested step management + trigger
- `/pipelines/runs/` --- Run listing, detail, logs, cancel, retry
- `/data-products/` --- Product listing + lineage tracking
- `/ux/pipeline-dashboard` --- Aggregate statistics for the UI
## ops-db-ui
{doc}`/ops-db-ui/docs/index`
The web frontend provides a pipeline dashboard showing:
- Pipeline definitions with their steps and status
- Run history with filtering by pipeline, step, and status
- Data product browser with provenance lineage
- DataGrouping management with interactive filter building and sub-group preview
## system-integration
{doc}`/system-integration/docs/index`
Deployment infrastructure:
- `docker-compose.staging.workflow-manager.yml` --- Docker Compose for staging
- Four containers: trigger-manager, workflow-manager, result-manager, celery-worker
- Environment variable configuration for each service
- Volume mounts for pipeline workspace and Redis certificates
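A simplified sketch of how two of the four services might be declared in the Compose file (image tags, environment variables, and paths are illustrative assumptions; the real `docker-compose.staging.workflow-manager.yml` defines images, networks, and secrets not shown here):

```yaml
# Illustrative fragment only, not the actual staging configuration
services:
  workflow-manager:
    image: workflow-manager:staging        # hypothetical image tag
    environment:
      - CELERY_BROKER_URL=rediss://redis:6380/0
    volumes:
      - pipeline-workspace:/workspace      # shared pipeline workspace
      - ./certs/redis:/certs:ro            # Redis TLS certificates
  celery-worker:
    image: workflow-manager:staging
    command: celery -A workflow_manager worker -Q workflow.default
    volumes:
      - pipeline-workspace:/workspace
volumes:
  pipeline-workspace:
```

Running the worker as its own container lets it scale independently of the trigger-manager and result-manager services.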
## Redis
Shared message broker for both data-transfer and workflow-manager Celery tasks:
- Broker URL with TLS authentication
- Queue isolation via routing configuration
- Health check heartbeat storage
- Local backend job tracking (for `HPC_BACKEND=local`)
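A TLS broker URL of the kind described above can be assembled as follows. All hosts, ports, passwords, and certificate paths are placeholder assumptions; real credentials come from the deployment environment. The `rediss://` scheme (double "s") and the `ssl_cert_reqs`/`ssl_ca_certs` query parameters are standard redis-py URL options:

```python
from urllib.parse import quote

# Placeholder connection details; never hard-code real credentials
REDIS_HOST = "redis.staging.example.org"
REDIS_PORT = 6380
REDIS_PASSWORD = "s3cret/pass"

# rediss:// tells Celery/redis-py to open a TLS connection; the password is
# percent-encoded so reserved characters survive inside the URL.
broker_url = (
    f"rediss://:{quote(REDIS_PASSWORD, safe='')}@{REDIS_HOST}:{REDIS_PORT}/0"
    "?ssl_cert_reqs=required&ssl_ca_certs=/certs/ca.pem"
)
```

Encoding the password with `quote(..., safe='')` matters because characters such as `/` or `@` would otherwise be parsed as URL structure.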