Worker Crates Overview

Last Updated: 2025-12-27
Audience: Developers, Architects, Operators
Status: Active
Related Docs: Worker Event Systems | Worker Actors

The tasker-core workspace provides four worker implementations for executing workflow step handlers. Each implementation targets different deployment scenarios and developer ecosystems while sharing the same core Rust foundation.

Quick Navigation

| Document | Description |
|---|---|
| API Convergence Matrix | Quick reference for aligned APIs across languages |
| Client Wrapper API | High-level client for submitting tasks (Ruby, Python, TypeScript) |
| Example Handlers | Side-by-side handler examples |
| Patterns and Practices | Common patterns across all workers |
| Rust Worker | Native Rust implementation |
| Ruby Worker | Ruby gem for Rails integration |
| Python Worker | Python package for data pipelines |
| TypeScript Worker | TypeScript/JS for Bun/Node.js |

Overview

Four Workers, One Foundation

All workers share the same Rust core (tasker-worker crate) for orchestration, queueing, and state management. The language-specific workers add handler execution in their respective runtimes.

┌────────────────────────────────────────────────────────────┐
│                    WORKER ARCHITECTURE                     │
└────────────────────────────────────────────────────────────┘

                              PostgreSQL + PGMQ
                                      │
                                      ▼
                    ┌─────────────────────────────┐
                    │   Rust Core (tasker-worker) │
                    │   ───────────────────────── │
                    │   • Queue Management        │
                    │   • State Machines          │
                    │   • Orchestration           │
                    │   • Actor System            │
                    └─────────────────────────────┘
                                      │
          ┌───────────────┬───────────┼───────────┬───────────────┐
          │               │           │           │               │
          ▼               ▼           ▼           ▼               ▼
    ┌───────────┐   ┌───────────┐   ┌───────────┐   ┌─────────────┐
    │   Rust    │   │   Ruby    │   │  Python   │   │ TypeScript  │
    │  Worker   │   │  Worker   │   │  Worker   │   │   Worker    │
    │───────────│   │───────────│   │───────────│   │─────────────│
    │ Native    │   │ FFI Bridge│   │ FFI Bridge│   │ FFI Bridge  │
    │ Handlers  │   │ + Gem     │   │ + Package │   │ Bun/Node.js │
    └───────────┘   └───────────┘   └───────────┘   └─────────────┘

Comparison Table

| Feature | Rust | Ruby | Python | TypeScript |
|---|---|---|---|---|
| Performance | Native | GVL-limited | GIL-limited | V8/Bun native |
| Integration | Standalone | Rails/Rack apps | Data pipelines | Node/Bun apps |
| Handler Style | Async traits | Class-based | ABC-based | Class-based |
| Concurrency | Tokio async | Thread + FFI poll | Thread + FFI poll | Event loop + native addon |
| Deployment | Binary | Gem + Server | Package + Server | Package + Server |
| Headless Mode | N/A | Library embed | Library embed | Library embed |
| Runtimes | - | MRI | CPython | Bun (primary), Node.js |

When to Use Each

Rust Worker - Best for:

  • Maximum throughput requirements
  • Resource-constrained environments
  • Standalone microservices
  • Performance-critical handlers

Ruby Worker - Best for:

  • Rails/Ruby applications
  • ActiveRecord/ORM integration
  • Existing Ruby codebases
  • Quick prototyping with Ruby ecosystem

Python Worker - Best for:

  • Data processing pipelines
  • ML/AI integration
  • Scientific computing workflows
  • Python-native team preferences

TypeScript Worker - Best for:

  • Modern JavaScript/TypeScript applications
  • Full-stack Node.js teams
  • High-performance Bun deployments
  • React/Vue/Angular backend services
  • Native addon integration via napi-rs

Deployment Modes

Server Mode

All workers can run as standalone servers:

Rust:

cargo run -p workers-rust

Ruby:

cd workers/ruby
./bin/server.rb

Python:

cd workers/python
python bin/server.py

TypeScript (Bun):

cd workers/typescript
bun run bin/server.ts

TypeScript (Node.js):

cd workers/typescript
npx tsx bin/server.ts

Headless/Embedded Mode (Ruby, Python & TypeScript)

Ruby, Python, and TypeScript workers can be embedded into existing applications without running the HTTP server. Headless mode is controlled via TOML configuration, not bootstrap parameters.

TOML Configuration (e.g., config/tasker/base/worker.toml):

[web]
enabled = false  # Disables HTTP server for headless/embedded mode

Ruby (in Rails):

# config/initializers/tasker.rb
require 'tasker_core'

# Bootstrap worker (web server disabled via TOML config)
TaskerCore::Worker::Bootstrap.start!

# Register handlers
TaskerCore::Registry::HandlerRegistry.instance.register_handler(
  'MyHandler',
  MyHandler
)

Python (in application):

from tasker_core import bootstrap_worker, HandlerRegistry
from tasker_core.types import BootstrapConfig

# Bootstrap worker (web server disabled via TOML config)
config = BootstrapConfig(namespace="my-app")
bootstrap_worker(config)

# Register handlers
registry = HandlerRegistry.instance()
registry.register("my_handler", MyHandler)

TypeScript (in application):

import { WorkerServer } from '@tasker-systems/tasker';

// Bootstrap worker (web server disabled via TOML config)
const server = new WorkerServer();
await server.start({ namespace: 'my-app' });

// Register handlers
const handlerSystem = server.getHandlerSystem();
handlerSystem.register('my_handler', MyHandler);

Core Concepts

1. Handler Registration

All workers use a registry pattern for handler discovery:

                    ┌─────────────────────┐
                    │  HandlerRegistry    │
                    │  (Singleton)        │
                    └─────────────────────┘
                              │
              ┌───────────────┼───────────────┐
              │               │               │
              ▼               ▼               ▼
         ┌─────────┐    ┌─────────┐    ┌─────────┐
         │Handler A│    │Handler B│    │Handler C│
         └─────────┘    └─────────┘    └─────────┘
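The singleton registry above can be sketched in plain Python. This is an illustrative stand-in, not the actual tasker_core `HandlerRegistry` implementation; the handler names are invented for the example.

```python
class HandlerRegistry:
    """Minimal singleton registry sketch (illustrative, not the tasker_core class)."""
    _instance = None

    @classmethod
    def instance(cls):
        # Lazily create the single shared instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._handlers = {}

    def register(self, name, handler_cls):
        # Map a string name to a handler class for later resolution.
        self._handlers[name] = handler_cls

    def resolve(self, name):
        try:
            return self._handlers[name]
        except KeyError:
            raise LookupError(f"No handler registered for {name!r}")


class HandlerA:
    def call(self, context):
        return {"handled_by": "A"}


registry = HandlerRegistry.instance()
registry.register("handler_a", HandlerA)
handler = registry.resolve("handler_a")()
print(handler.call({}))  # {'handled_by': 'A'}
```

Handlers are registered once at bootstrap and resolved by name when a step event arrives, which keeps handler lookup decoupled from handler definition.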

2. Event Flow

Step events flow through a consistent pipeline:

1. PGMQ Queue → Event received
2. Worker claims step (atomic)
3. Handler resolved by name
4. Handler.call(context) executed
5. Result sent to completion channel
6. Orchestration receives result
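The six steps above can be sketched with in-memory stand-ins. `FakeQueue`, the dict-based registry, and the completions list are illustrative substitutes for the real PGMQ/FFI machinery, not the tasker_core API.

```python
from collections import deque

class FakeQueue:
    """In-memory stand-in for a PGMQ queue (illustrative only)."""
    def __init__(self, events):
        self._events = deque(events)
        self._claimed = set()

    def read(self):
        return self._events.popleft() if self._events else None

    def claim(self, step_uuid):
        # Atomic in the real system; a set suffices for the sketch.
        if step_uuid in self._claimed:
            return False
        self._claimed.add(step_uuid)
        return True

def process_one(queue, handlers, completions):
    event = queue.read()                                  # 1. event received
    if event is None or not queue.claim(event["step_uuid"]):  # 2. claim step atomically
        return None
    handler = handlers[event["handler"]]                  # 3. resolve handler by name
    result = handler(event["context"])                    # 4. Handler.call(context)
    completions.append((event["step_uuid"], result))      # 5. send to completion channel
    return result                                         # 6. orchestration receives it

queue = FakeQueue([{"step_uuid": "abc-123", "handler": "process_order", "context": {}}])
completions = []
handlers = {"process_order": lambda ctx: {"status": "done"}}
print(process_one(queue, handlers, completions))  # {'status': 'done'}
```

Losing the claim race (step 2) simply drops the event, which is why the claim must be atomic in the real system.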

3. Error Classification

All workers distinguish between:

  • Retryable Errors: Transient failures → Re-enqueue step
  • Permanent Errors: Unrecoverable → Mark step failed
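In Python terms, the dispatch logic amounts to something like the following. `RetryableError` and `PermanentError` are hypothetical names for this sketch, not the actual tasker_core exception classes.

```python
class RetryableError(Exception):
    """Transient failure: the step should be re-enqueued."""

class PermanentError(Exception):
    """Unrecoverable failure: the step should be marked failed."""

def dispatch(handler, context):
    # Classify the outcome of a handler invocation (illustrative sketch).
    try:
        return {"status": "success", "result": handler(context)}
    except RetryableError as exc:
        return {"status": "retry", "error": str(exc)}
    except PermanentError as exc:
        return {"status": "failed", "error": str(exc)}

def flaky(context):
    raise RetryableError("upstream timeout")

print(dispatch(flaky, {}))  # {'status': 'retry', 'error': 'upstream timeout'}
```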

4. Graceful Shutdown

All workers handle shutdown signals (SIGTERM, SIGINT):

1. Signal received
2. Stop accepting new work
3. Complete in-flight handlers
4. Flush completion channel
5. Shutdown Rust foundation
6. Exit cleanly
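The sequence above can be sketched with a stop flag that gates new work while in-flight handlers drain. The callables passed to `run()` are stand-ins for the real worker internals, not the tasker_core API.

```python
import signal
import threading

stop = threading.Event()

def request_stop(signum, frame):
    stop.set()  # steps 1-2: signal received; stop accepting new work

# Guard: signal handlers can only be installed from the main thread.
if threading.current_thread() is threading.main_thread():
    signal.signal(signal.SIGTERM, request_stop)
    signal.signal(signal.SIGINT, request_stop)

def run(next_event, execute, flush, shutdown_foundation):
    while not stop.is_set():
        event = next_event()
        if event is None:
            break
        execute(event)            # step 3: complete in-flight handlers
    flush()                       # step 4: flush the completion channel
    shutdown_foundation()         # step 5: shut down the Rust foundation
    # step 6: returning from run() lets the process exit cleanly
```

The key ordering constraint is that the completion channel is flushed before the Rust foundation shuts down, so no finished work is lost.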

Configuration

Environment Variables

Common across all workers:

| Variable | Description | Default |
|---|---|---|
| DATABASE_URL | PostgreSQL connection string | Required |
| TASKER_ENV | Environment (test/development/production) | development |
| TASKER_CONFIG_PATH | Path to TOML configuration | Auto-detected |
| TASKER_TEMPLATE_PATH | Path to task templates | Auto-detected |
| TASKER_NAMESPACE | Worker namespace for queue isolation | default |
| RUST_LOG | Log level (trace/debug/info/warn/error) | info |
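A worker might resolve these common settings as sketched below. This is illustrative only; the real workers read these variables internally via tasker_core, and `load_worker_env` is a hypothetical helper.

```python
import os

def load_worker_env(env=None):
    """Resolve the common settings with their documented defaults (sketch)."""
    env = os.environ if env is None else env
    database_url = env.get("DATABASE_URL")
    if not database_url:
        # DATABASE_URL has no default; it is required.
        raise RuntimeError("DATABASE_URL is required")
    return {
        "database_url": database_url,
        "tasker_env": env.get("TASKER_ENV", "development"),
        "namespace": env.get("TASKER_NAMESPACE", "default"),
        "log_level": env.get("RUST_LOG", "info"),
    }

cfg = load_worker_env({"DATABASE_URL": "postgres://localhost/tasker"})
print(cfg["tasker_env"], cfg["namespace"], cfg["log_level"])  # development default info
```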

Language-Specific

Ruby:

| Variable | Description |
|---|---|
| RUBY_GC_HEAP_GROWTH_FACTOR | GC tuning for production |

Python:

| Variable | Description |
|---|---|
| PYTHON_HANDLER_PATH | Path for handler auto-discovery |

Handler Types

All workers support specialized handler types:

StepHandler (Base)

Basic step execution:

from tasker_core.step_handler.functional import step_handler, inputs

@step_handler("my_handler")
@inputs(MyInputModel)
def my_handler(inputs: MyInputModel, context):
    return {"result": "done"}

See Class-Based Handlers for the inheritance-based alternative.

ApiHandler

HTTP/REST API integration with automatic error classification:

extend TaskerCore::StepHandler::Functional

FetchDataHandler = api_handler(
  'FetchDataHandler',
  base_url: 'https://api.example.com',
  inputs: [:user_id]
) do |user_id:, api:, context:|
  response = api.get("/users/#{user_id}")
  api.api_success(result: { user: response.body })
end

See Class-Based Handlers for the inheritance-based alternative.

DecisionHandler

Dynamic workflow routing:

from tasker_core.step_handler.functional import decision_handler, inputs, Decision

@decision_handler("routing_decision")
@inputs('amount')
def routing_decision(amount, context):
    if float(amount or 0) < 1000:
        return Decision.route(['auto_approve'], route_type='automatic')
    return Decision.route(['manager_approval'], route_type='manager')

See Class-Based Handlers for the inheritance-based alternative.

Batchable

Large dataset processing with separate analyzer and worker handlers:

import { defineBatchAnalyzer, defineBatchWorker, BatchConfig } from '@tasker-systems/tasker';

export const CsvAnalyzer = defineBatchAnalyzer(
  'Csv.StepHandlers.CsvAnalyzerHandler',
  { inputs: { csvPath: 'csv_path' } },
  async ({ csvPath }) => ({
    totalItems: await countRows(csvPath as string),
    batchSize: 100,
  }),
);

export const CsvWorker = defineBatchWorker(
  'Csv.StepHandlers.CsvWorkerHandler',
  { analyzerStep: 'analyze_csv' },
  async ({ batchContext }) => ({ processed: batchContext.batchSize }),
);

See Class-Based Handlers for the inheritance-based alternative.


Quick Start

Rust

# Build and run
cd workers/rust
cargo run

# With custom configuration
TASKER_CONFIG_PATH=/path/to/config.toml cargo run

Ruby

# Install dependencies
cd workers/ruby
bundle install
bundle exec rake compile

# Run server
./bin/server.rb

Python

# Install dependencies
cd workers/python
uv sync
uv run maturin develop

# Run server
python bin/server.py

TypeScript

# Install dependencies
cd workers/typescript
bun install
bun run build:napi   # Build napi-rs native addon
bun run build        # Build TypeScript

# Run server (Bun)
bun run bin/server.ts

# Run server (Node.js)
npx tsx bin/server.ts

Monitoring

Health Checks

All workers expose health status:

Python:

from tasker_core import get_health_check

health = get_health_check()

Ruby:

health = TaskerCore::FFI.health_check
Metrics

Common metrics available:

| Metric | Description |
|---|---|
| pending_count | Events awaiting processing |
| in_flight_count | Events being processed |
| completed_count | Successfully completed |
| failed_count | Failed events |
| starvation_detected | Processing bottleneck |
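One way to use these counters in a monitoring loop is sketched below. The threshold and the `looks_starved` helper are a hypothetical heuristic, not the worker's built-in starvation_detected logic.

```python
def looks_starved(metrics, max_pending=1000):
    # Flag a likely bottleneck: work is piling up while nothing is in flight.
    return metrics["pending_count"] > max_pending and metrics["in_flight_count"] == 0

print(looks_starved({"pending_count": 5000, "in_flight_count": 0}))  # True
print(looks_starved({"pending_count": 50, "in_flight_count": 8}))   # False
```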

Logging

All workers use structured logging:

2025-01-15T10:30:00Z [INFO] python-worker: Processing step step_uuid=abc-123 handler=process_order
2025-01-15T10:30:01Z [INFO] python-worker: Step completed step_uuid=abc-123 success=true duration_ms=150

Architecture Deep Dive

For detailed architectural documentation:


See Also