# Workflows — Visual Healthcare Automation Builder
Modern healthcare requires more than just storing or analyzing data — it needs automated, intelligent coordination across systems, models, and processes.
ByteEngine's Workflows bring this power to your fingertips — allowing you to visually build, configure, and deploy end-to-end healthcare automations using FHIR, DICOM, AI Workers, and Pipelines.
## What Are Workflows?
A Workflow in ByteEngine is a connected sequence of actions — each action powered by one or more components:
- AI Workers (to reason, respond, and act)
- Pipelines (to connect legacy data sources)
- Sessions & Tasks (to maintain context and orchestrate logic)
- FHIR/DICOM Servers (for compliant healthcare data exchange)
- Tools & Models (for AI inference or utility functions)
Each workflow defines what happens, when, and how — all in a structured, HIPAA-compliant, and reusable way.
## Real-World Example
Imagine automating the process of radiology triage:
- A new DICOM image is uploaded
- A Worker runs an AI model to classify the image (e.g., chest X-ray → pneumonia)
- The result is stored as a FHIR Observation resource
- A bot sends a message to the on-call radiologist via Slack
- All data is logged in a Session for traceability
This entire flow — across multiple systems and data formats — can be built visually using the ByteEngine Workflow Builder.
## The Visual Workflow Builder (No-Code Interface)
ByteEngine's Workflow Builder is an interactive, drag-and-drop editor where you can visually define healthcare automation.
### Steps in the Builder

1. Navigate to Automation → Workflows → Create Workflow
2. Drag components from the sidebar:
   - Worker
   - FHIR Server
   - Pipeline
   - Session
   - Task
   - Model
   - Trigger
3. Connect components with arrows to define execution flow
4. Configure logic using natural language or YAML
### Example Layout

```
[Trigger: New DICOM Upload]
        ↓
[AI Worker: Radiology Assistant]
        ↓
[Model: Image Classifier]
        ↓
[FHIR Server: Observation/Report Creation]
        ↓
[Notification Bot: Slack Message]
```
Each block represents a node, and connections define data flow and execution order.
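Conceptually, the builder's canvas is a graph: nodes are components and arrows are edges. As an illustrative sketch only (not ByteEngine's internal representation — the node IDs and field names here are invented for the example), the layout above could be modeled and ordered for execution like this:

```javascript
// Nodes mirror the blocks in the example layout above (IDs are illustrative).
const nodes = {
  trigger: { type: 'trigger', name: 'New DICOM Upload' },
  worker: { type: 'worker', name: 'Radiology Assistant' },
  model: { type: 'model', name: 'Image Classifier' },
  fhir: { type: 'fhir', name: 'Observation/Report Creation' },
  bot: { type: 'bot', name: 'Slack Message' },
};

// Each arrow drawn in the builder becomes an edge: [from, to].
const edges = [
  ['trigger', 'worker'],
  ['worker', 'model'],
  ['model', 'fhir'],
  ['fhir', 'bot'],
];

// Derive execution order with a topological sort (Kahn's algorithm),
// so steps only run once everything they depend on has finished.
function executionOrder(nodes, edges) {
  const indegree = Object.fromEntries(Object.keys(nodes).map((id) => [id, 0]));
  for (const [, to] of edges) indegree[to] += 1;
  const queue = Object.keys(nodes).filter((id) => indegree[id] === 0);
  const order = [];
  while (queue.length) {
    const id = queue.shift();
    order.push(id);
    for (const [from, to] of edges) {
      if (from === id && --indegree[to] === 0) queue.push(to);
    }
  }
  return order;
}
```

Because execution order is derived from the edges rather than hard-coded, rearranging arrows in the builder is all it takes to change the flow.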
## Triggers
A Trigger defines when a workflow starts. You can choose from several trigger types:
| Trigger Type | Example Use Case |
|---|---|
| FHIR Event | When a new Observation or Patient is created |
| Schedule | Run every 6 hours to sync legacy data |
| External Webhook | Trigger from external systems |
| Manual / On-Demand | Start via button in dashboard or API |
| File Upload / DICOM Upload | Start on new image or document ingestion |
Example:

```yaml
trigger:
  type: fhir
  event: "Patient.create"
```
## Building a Workflow via YAML
While you can use the visual builder, you can also define workflows declaratively for automation and CI/CD pipelines.
### Example: Radiology AI Workflow

```yaml
workflow:
  id: radiology-ai-workflow
  trigger:
    type: dicom
    event: "NewImageUploaded"
  steps:
    - id: analyze-image
      type: worker
      worker: radiology-assistant
      input:
        file: "{{trigger.file}}"
      output: analysisResult
    - id: store-observation
      type: fhir
      action: create
      resourceType: Observation
      data:
        status: final
        category: imaging
        code:
          text: "AI Analysis Result"
        subject:
          reference: "Patient/{{trigger.patientId}}"
        valueString: "{{steps.analyze-image.output.analysisResult}}"
    - id: notify-radiologist
      type: bot
      tool: Slack.send
      params:
        channel: "#radiology"
        message: "AI detected potential issue for Patient {{trigger.patientId}}"
```
This workflow listens for a new DICOM upload, analyzes the image with an AI worker, stores the output in FHIR, and alerts the radiologist.
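The `{{...}}` placeholders are resolved against the run's context at execution time: trigger data under `trigger.*` and earlier step outputs under `steps.<id>.*`. A minimal sketch of how such interpolation works (an illustration of the pattern, not ByteEngine's actual resolver):

```javascript
// Resolve {{dotted.path}} placeholders against a run context object.
function interpolate(template, context) {
  return template.replace(/\{\{([^}]+)\}\}/g, (_, path) => {
    // Walk dotted paths like "steps.analyze-image.output.analysisResult".
    const value = path.trim().split('.').reduce((obj, key) => obj?.[key], context);
    return value ?? '';
  });
}

// Example context shaped like the workflow above ("possible pneumonia"
// is a made-up model output for illustration).
const context = {
  trigger: { patientId: '789', file: 'dicom://uploads/study123.dcm' },
  steps: { 'analyze-image': { output: { analysisResult: 'possible pneumonia' } } },
};
```

With this context, `interpolate('Patient/{{trigger.patientId}}', context)` yields `Patient/789`.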
## Using the JavaScript SDK

```javascript
import { EngineClient } from '@boolbyte/engine';

const client = new EngineClient({ apiKey: 'YOUR_API_KEY' });

// Create a workflow
const workflow = await client.workflow.createWorkflow({
  name: 'radiology-ai-workflow',
  trigger: {
    type: 'dicom',
    event: 'NewImageUploaded'
  },
  steps: [
    {
      id: 'analyze-image',
      type: 'worker',
      worker: 'radiology-assistant',
      input: {
        file: '{{trigger.file}}'
      }
    },
    {
      id: 'store-observation',
      type: 'fhir',
      action: 'create',
      resourceType: 'Observation',
      data: {
        status: 'final',
        category: 'imaging',
        code: {
          text: 'AI Analysis Result'
        },
        subject: {
          reference: 'Patient/{{trigger.patientId}}'
        },
        valueString: '{{steps.analyze-image.output.analysisResult}}'
      }
    }
  ]
});

console.log('Workflow created:', workflow.data.id);
```
## Workflows & Context: Behind the Scenes
Every workflow automatically uses Sessions under the hood. This ensures:
- Context persists between steps
- AI outputs are traceable
- All actions are logged (for auditing)
- Rollbacks can occur if something fails
So even if your workflow runs across multiple servers or models, it remains stateful and recoverable.
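The per-step logging and rollback behavior described above can be sketched as follows. This is a hypothetical illustration of the pattern, not ByteEngine's engine code (steps run synchronously here for brevity, and `rollback` stands in for whatever compensating action a step defines):

```javascript
// Each step writes its output into the shared session context and is logged;
// on failure, previously completed steps are rolled back in reverse order.
function runWorkflow(steps) {
  const session = { context: {}, log: [] };
  const completed = [];
  for (const step of steps) {
    try {
      session.context[step.id] = step.run(session.context);
      session.log.push({ step: step.id, status: 'ok' });
      completed.push(step);
    } catch (err) {
      session.log.push({ step: step.id, status: 'failed' });
      // Undo completed steps, most recent first.
      for (const done of completed.reverse()) {
        if (done.rollback) done.rollback(session.context);
      }
      throw err;
    }
  }
  return session;
}
```

Because every step reads from and writes to one session object, the run's full history is reconstructible after the fact, which is what makes the audit trail possible.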
## Executing a Workflow via API

```bash
curl -X POST https://api.engine.boolbyte.com/api/workflows/radiology-ai-workflow/execute \
  -H "Authorization: Bearer <ACCESS_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "file": "dicom://uploads/study123.dcm",
    "patientId": "Patient/789"
  }'
```
Response:

```json
{
  "success": true,
  "data": {
    "runId": "run_5678",
    "status": "running",
    "message": "Workflow execution started",
    "createdAt": "2024-01-15T10:00:00.000Z"
  }
}
```
You can track the run via:

```bash
curl -X GET https://api.engine.boolbyte.com/api/workflows/runs/run_5678 \
  -H "Authorization: Bearer <ACCESS_TOKEN>"
```
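Since execution is asynchronous (the create call returns `status: "running"`), a client typically polls the run endpoint until it settles. A generic sketch of that loop — the terminal statuses `completed`/`failed` are assumptions here (only `running` appears above), and `getStatus` is injected so the pattern works with any HTTP client:

```javascript
// Poll an injected status getter until the run reaches a terminal state.
async function waitForRun(getStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const run = await getStatus();
    if (run.status === 'completed' || run.status === 'failed') return run;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for workflow run');
}
```

In practice `getStatus` would issue the GET request shown above and return the parsed `data` object.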
## Reusable Components
ByteEngine Workflows are composable — meaning you can reuse existing:
- Pipelines
- Workers
- Sessions
- Models
- Triggers
For example, one workflow can reuse another's FHIR data ingestion pipeline or share an AI worker configuration.
## Security, Logging, and Audit
Each workflow run is:
- Encrypted end-to-end
- Logged for auditability
- Fully traceable (view step-by-step history)
- Isolated by tenant and data residency zone
Audit logs can be exported for compliance review:
```bash
curl -X GET https://api.engine.boolbyte.com/api/workflows/runs/run_5678/audit \
  -H "Authorization: Bearer <ACCESS_TOKEN>"
```
## Monitoring Workflows
The ByteEngine Dashboard provides:
- Real-time run status
- Step-level execution times
- Retry & rollback options
- Visual analytics for workflow performance
You can even replay past runs for debugging or audit verification.
## Common Use Cases
| Use Case | Description |
|---|---|
| AI Diagnostics Automation | Trigger models automatically when new imaging arrives |
| FHIR Event Reactions | Run workflows when a new Observation or Encounter is created |
| Data Synchronization | Schedule pipelines to sync data between EMRs and FHIR servers |
| Clinical Decision Support | Use Workers to generate AI recommendations on patient updates |
| Administrative Tasks | Auto-generate discharge summaries or notify staff |
## Architecture Overview

```
+----------------------------------------+
|            Workflow Engine             |
|----------------------------------------|
|  Triggers | Steps | Context | Logging  |
+--------------------+-------------------+
                     |
                     v
+--------------------+-------------------+
|          Connected Components          |
|   (FHIR Servers, Workers, Pipelines)   |
+----------------------------------------+
```
## Real-World Implementation Examples
### AI Diagnostics Automation

```yaml
workflow:
  id: ai-diagnostics-workflow
  trigger:
    type: fhir
    event: "Observation.create"
    filter: "category=imaging"
  steps:
    - id: analyze-image
      type: worker
      worker: diagnostic-assistant
      model: medgemma-27b
      input:
        observation: "{{trigger.resource}}"
    - id: create-diagnostic-report
      type: fhir
      action: create
      resourceType: DiagnosticReport
      data:
        status: final
        category: imaging
        subject: "{{trigger.resource.subject}}"
        conclusion: "{{steps.analyze-image.output.diagnosis}}"
```
### Data Synchronization Workflow

```yaml
workflow:
  id: emr-sync-workflow
  trigger:
    type: schedule
    cron: "0 */6 * * *"  # Every 6 hours
  steps:
    - id: sync-patients
      type: pipeline
      pipeline: legacy-emr-pipeline
      action: sync
      resourceType: Patient
    - id: sync-observations
      type: pipeline
      pipeline: lab-results-pipeline
      action: sync
      resourceType: Observation
```
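The schedule trigger's cron expression `0 */6 * * *` means minute 0 of every sixth hour, i.e. 00:00, 06:00, 12:00, and 18:00. A small sketch of how the minute and hour fields expand (handling only the `*`, `*/n`, and comma-list forms used here):

```javascript
// Expand one cron field into the set of matching values in [0, max).
function expandField(field, max) {
  if (field === '*') return Array.from({ length: max }, (_, i) => i);
  const step = field.match(/^\*\/(\d+)$/); // "*/n" → every n-th value
  if (step) {
    const n = Number(step[1]);
    return Array.from({ length: max }, (_, i) => i).filter((i) => i % n === 0);
  }
  return field.split(',').map(Number); // plain values like "0" or "0,30"
}

// The first two fields of "0 */6 * * *" are minute and hour.
const [minute, hour] = '0 */6 * * *'.split(' ');
```

Here `expandField(hour, 24)` gives `[0, 6, 12, 18]` and `expandField(minute, 60)` gives `[0]`, confirming the four daily firing times.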
## Best Practices
| Goal | Recommendation |
|---|---|
| Maintain readability | Use descriptive step IDs and names |
| Ensure resilience | Add retries and error-handling steps |
| Secure workflows | Restrict access to sensitive triggers |
| Version control | Export workflow YAML to Git |
| Observe execution | Use monitoring dashboard for debugging |
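For the resilience recommendation, a common pattern is wrapping a flaky step call in retries with exponential backoff. A generic sketch of that pattern (the parameter names are illustrative, not ByteEngine configuration options):

```javascript
// Retry fn up to `retries` extra times, doubling the delay each attempt.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // attempts exhausted: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping a transient operation, e.g. `await withRetry(() => callFhirServer())`, keeps one flaky network hop from failing the whole run.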
## Next Steps
Next Section → Apps, Bots & Subscriptions — Extending ByteEngine Functionality
Learn how to extend ByteEngine with custom applications, bots, and event-driven subscriptions.
- Workflows API Reference - Complete API documentation
- AI Workers Guide - Create intelligent healthcare agents
- Pipelines Guide - Connect legacy data sources
- Quick Start Guide - Build your first healthcare application