Asset Processor Service

Node.js microservice for processing 3D model assets — generating thumbnails, animated previews, waveforms, and more. Operates independently from the .NET Web API, communicating via REST API and SignalR.

Quick Start

Prerequisites

  • Node.js 18+
  • FFmpeg
  • Backend Web API running

Setup & Run

cd src/asset-processor
npm install
cp .env.example .env
# Edit .env with your API URL
npm start

Health check: curl http://localhost:3001/health

Key Features

  • Real-time processing - SignalR notifications for instant job pickup
  • 3D format support - OBJ, FBX, GLTF, GLB and more via Three.js loaders
  • Blender .blend conversion - Headless Blender converts .blend files to .glb before rendering (x64 Blender via QEMU on ARM64)
  • Orbit animations - Smooth 360° rotation at configurable angles
  • WebP encoding - Animated WebP with configurable quality/framerate
  • API-based storage - Uploads to backend, avoiding filesystem permission issues
  • Hash deduplication - Backend handles duplicate detection
  • Multiple workers - Supports concurrent instances with load balancing
  • Health monitoring - HTTP endpoints for container orchestration

How It Works

1. Upload → Backend creates job and sends SignalR notification
2. Worker → Receives notification via SignalR
3. Worker → Claims job via POST /thumbnail-jobs/dequeue
4. Worker → Dispatches to processor based on asset type (Model/Sound/TextureSet)
5. Worker → Downloads asset files from API
6. Worker → (.blend only) Converts to .glb via headless Blender (export_glb.py)
7. Worker → (.blend only) Uploads converted .glb back to model version
8. Worker → Loads and normalizes asset (3D model / sphere geometry / audio)
9. Worker → Generates orbit animation frames (360°) for models, swing animation frames for texture sets, or waveform for sounds
10. Worker → Encodes frames to animated WebP + poster image
11. Worker → Uploads thumbnail to API
12. Worker → (TextureSet only) Generates web proxy textures at the configured resolution (from `/settings` API `textureProxySize`, or overridden by `job.proxySize` if set) and uploads via PUT /texture-sets/{id}/textures/{textureId}/web-proxy
13. Worker → Reports completion status
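The claim-and-dispatch steps above can be sketched as follows. This is illustrative only, assuming Node 18's global fetch; the dequeue endpoint comes from the text, but the request/response field names (`workerId`, `assetType`) and the 204-means-empty convention are assumptions:

```javascript
// Sketch of claim (POST /thumbnail-jobs/dequeue) and dispatch by asset type.
const PROCESSOR_BY_TYPE = {
  Model: 'MeshProcessor',
  Sound: 'SoundProcessor',
  TextureSet: 'TextureSetProcessor',
};

function routeProcessor(assetType) {
  const name = PROCESSOR_BY_TYPE[assetType];
  if (!name) throw new Error(`No processor for asset type: ${assetType}`);
  return name;
}

async function claimAndRoute(apiBaseUrl, workerId) {
  // First worker to POST wins the job (first-come-first-served).
  const res = await fetch(`${apiBaseUrl}/thumbnail-jobs/dequeue`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workerId }), // payload shape is an assumption
  });
  if (res.status === 204) return null; // assumed: empty queue
  const job = await res.json();
  return { job, processor: routeProcessor(job.assetType) };
}
```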

Configuration

Create .env from .env.example:

Essential Settings

# API Connection
API_BASE_URL=http://localhost:5009

# Worker Identification
WORKER_ID=worker-1
WORKER_PORT=3001

# Processing
MAX_CONCURRENT_JOBS=3

# Rendering
RENDER_WIDTH=256
RENDER_HEIGHT=256
RENDER_FORMAT=png

Advanced Settings

# Orbit Animation
ENABLE_ORBIT_ANIMATION=true
ORBIT_ANGLE_STEP=12 # 360/12 = 30 frames
CAMERA_HEIGHT_MULTIPLIER=0.75

# Encoding
ENABLE_FRAME_ENCODING=true
ENCODING_FRAMERATE=10
WEBP_QUALITY=75
JPEG_QUALITY=85

# Logging
LOG_LEVEL=info # debug, info, warn, error
LOG_FORMAT=pretty # pretty or json

# Cleanup
CLEANUP_TEMP_FILES=true
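The orbit and encoding settings above jointly determine frame count and clip length; a quick sketch of the arithmetic (function name is illustrative):

```javascript
// Frame count and playback duration implied by ORBIT_ANGLE_STEP
// and ENCODING_FRAMERATE.
function orbitStats(angleStepDeg, framerateFps) {
  const frames = Math.ceil(360 / angleStepDeg); // one full rotation
  const durationSec = frames / framerateFps;    // playback length
  return { frames, durationSec };
}

// Defaults (ORBIT_ANGLE_STEP=12, ENCODING_FRAMERATE=10)
// → 30 frames, a 3-second loop.
```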

Docker Deployment

Docker Compose

# Start worker
docker compose up -d asset-processor

# Scale to 3 workers
docker compose up -d --scale asset-processor=3

# View logs
docker compose logs -f asset-processor

# Check health
curl http://localhost:3001/health

Standalone Docker

docker build -t modelibr-asset-processor src/asset-processor
docker run -d \
-e API_BASE_URL=http://host.docker.internal:5009 \
-e WORKER_ID=worker-1 \
-p 3001:3001 \
modelibr-asset-processor

Development

Local Development

npm run dev  # Auto-reload on file changes

Manual Testing

# Test scripts are in the tests/ subdirectory
node tests/test-puppeteer.js
node tests/test-scene-cleanup.js

# Test with debug logging
export LOG_LEVEL=debug
npm start

# Keep temp files for inspection
export CLEANUP_TEMP_FILES=false
npm start

Unit Tests (Vitest)

The asset processor uses Vitest for unit testing with 33 tests across 5 test files:

| Test File | Tests | What It Covers |
|---|---|---|
| tests/processorRegistry.test.js | 11 | Processor registration, lookup, strategy dispatch |
| tests/baseProcessor.test.js | 8 | Base processor contract, lifecycle hooks |
| tests/meshProcessor.test.js | 8 | 3D mesh processing pipeline |
| tests/config.test.js | 3 | Configuration loading and validation |
| tests/healthServer.test.js | 3 | Health endpoint responses |

# Run all unit tests
npm test

# Watch mode (re-runs on file changes)
npm run test:watch

# Run with coverage report
npm run test:coverage

Code Quality

npm run lint        # Check code style
npm run lint:fix    # Fix code style issues
npm run format      # Format with Prettier

CI Integration

The asset-processor-quality job in GitHub Actions runs linting, and the asset-processor-tests job runs the full Vitest suite on every PR.

Troubleshooting

Container Won't Start

"exec /app/docker-entrypoint.sh: no such file or directory"

This error is caused by Windows line endings (CRLF) in the entrypoint script. The repository includes two fixes:

  1. .gitattributes enforces LF endings for shell scripts
  2. Dockerfile includes dos2unix conversion step

Solution: Simply rebuild the container:

docker compose build asset-processor
docker compose up -d asset-processor

For existing checkouts, optionally normalize line endings:

git rm --cached -r .
git reset --hard HEAD

No Logs / Application Not Running

If container starts but produces no logs and Node.js isn't running:

  • Container uses custom docker-entrypoint.sh script
  • Starts Xvfb in background before Node.js
  • Ensures proper log forwarding to Docker stdout

Verify:

# Check if node is running
docker compose exec asset-processor sh -c 'pidof node'

# View startup sequence
docker compose logs asset-processor | head -20

Expected logs:

info: Starting Modelibr Asset Processor Service
info: Configuration validated successfully
info: Health server started
info: Starting SignalR-based job processor

Connection Issues

Cannot Connect to API

Check API is accessible:

# Direct test
curl http://localhost:5009/health

# From Docker container
docker compose exec asset-processor curl http://webapi:8080/health

Common fixes:

# Wrong URL - include protocol
API_BASE_URL=http://localhost:5009 # ✓
# NOT: API_BASE_URL=localhost:5009 # ✗

# Docker networking - use service name
API_BASE_URL=http://webapi:8080

# Worker on host, API in Docker
API_BASE_URL=http://host.docker.internal:5009

SignalR Connection Failed

Enable debug logging:

LOG_LEVEL=debug

Verify SignalR hub endpoint:

curl http://localhost:5009/jobProcessingHub
# Should return connection upgrade message or 404

Processing Failures

"Failed to create WebGL context with headless-gl"

Requires two fixes:

  1. Mesa OpenGL libraries - Install runtime libraries
  2. Xvfb startup - Ensure Xvfb is ready before app starts

The latest Docker image includes both fixes. Rebuild:

docker compose build asset-processor

Verify:

# Check Xvfb is running
docker compose exec asset-processor sh -c 'pidof Xvfb'

# Verify DISPLAY is set
docker compose exec asset-processor sh -c 'echo $DISPLAY' # Should show :99

# Check Mesa libraries
docker compose exec asset-processor dpkg -l | grep -E 'libgl1|mesa'

"Failed to load model: Invalid file format"

Check supported formats: .obj, .fbx, .gltf, .glb, .blend (auto-converted to .glb)

Test with a simple .obj file:

curl -F "file=@test-model.obj" http://localhost:5009/models

"Frame encoding failed"

Verify FFmpeg:

ffmpeg -version
which ffmpeg

Install if missing:

# Ubuntu/Debian
sudo apt-get install ffmpeg

# macOS
brew install ffmpeg

Performance Issues

Slow Processing

Optimize settings:

# Reduce dimensions
RENDER_WIDTH=256
RENDER_HEIGHT=256

# Fewer frames (30° steps = 12 frames)
ORBIT_ANGLE_STEP=30

# Lower quality for faster encoding
WEBP_QUALITY=65
JPEG_QUALITY=75

High Memory Usage

Reduce workload:

# Lower concurrency
MAX_CONCURRENT_JOBS=2

# Enable cleanup
CLEANUP_TEMP_FILES=true

# Fewer frames
ORBIT_ANGLE_STEP=30

Docker memory limit:

# docker-compose.yml
services:
  asset-processor:
    deploy:
      resources:
        limits:
          memory: 4G  # Increase from 2G

Queue Backlog

Scale workers:

docker compose up -d --scale asset-processor=5

Or increase per-worker capacity:

MAX_CONCURRENT_JOBS=5

Disk Space Issues

"ENOSPC: no space left on device"

Clean up:

# Remove temp files
rm -rf /tmp/modelibr-worker/*
rm -rf /tmp/modelibr-frame-encoder/*

# Docker cleanup
docker system prune -a --volumes

Enable auto-cleanup:

CLEANUP_TEMP_FILES=true

Debugging Tools

Enable Debug Logging

export LOG_LEVEL=debug
npm start

Preserve Temporary Files

export CLEANUP_TEMP_FILES=false
npm start

# Inspect files:
# /tmp/modelibr-worker/downloads/
# /tmp/modelibr-frame-encoder/job-*/

Monitor Health

# Basic health
curl http://localhost:3001/health | jq

# Detailed status
curl http://localhost:3001/status | jq

# Watch status
watch -n 5 'curl -s http://localhost:3001/status | jq ".worker,.system.memory"'

Test Components

# Run unit tests
npm test

# Test SignalR (use browser developer tools)
# Connect to ws://localhost:5009/jobProcessingHub

Architecture

Technology Stack

  • Runtime - Node.js 18+ with Express for health endpoints
  • 3D Rendering - Puppeteer with Three.js for browser-based rendering
  • Image Processing - Sharp for frame processing and poster generation
  • Animation Encoding - node-webpmux for animated WebP creation
  • Communication - @microsoft/signalr for real-time notifications, axios for REST
  • Logging - Winston with structured JSON logging

Core Components

index.js - Main application entry, starts SignalR processor and health server

config.js - Centralized configuration from environment variables with validation
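A minimal sketch of the kind of validation config.js performs (variable names match the Environment Variables Reference; the specific checks and the function name are assumptions, not the actual implementation):

```javascript
// Illustrative env loading with defaults and fail-fast validation.
function loadConfig(env = process.env) {
  const config = {
    apiBaseUrl: env.API_BASE_URL || 'http://localhost:5009',
    workerId: env.WORKER_ID || 'worker-1',
    workerPort: parseInt(env.WORKER_PORT || '3001', 10),
    maxConcurrentJobs: parseInt(env.MAX_CONCURRENT_JOBS || '3', 10),
  };
  // A common misconfiguration: API_BASE_URL without a protocol.
  if (!/^https?:\/\//.test(config.apiBaseUrl)) {
    throw new Error('API_BASE_URL must include protocol (http:// or https://)');
  }
  if (!Number.isInteger(config.maxConcurrentJobs) || config.maxConcurrentJobs < 1) {
    throw new Error('MAX_CONCURRENT_JOBS must be a positive integer');
  }
  return config;
}
```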

signalrQueueService.js - SignalR connection management and job notifications (connects to /jobProcessingHub)

jobApiClient.js - Job acquisition, status updates, and result reporting via HTTP (formerly thumbnailJobService.js)

jobProcessor.js - Orchestrates job processing; uses ProcessorRegistry for strategy-based dispatch

processors/ - Strategy pattern processors:

  • processorRegistry.js - Registers and resolves processors by job type
  • baseProcessor.js - Abstract base class defining the processor contract
  • meshProcessor.js - 3D mesh thumbnail generation (OBJ, FBX, GLTF, GLB)
  • soundProcessor.js - Audio waveform generation
  • textureSetProcessor.js - Texture set preview thumbnail generation (Universal/Global Materials). Renders a swing animation (camera swings from 45° top-left to 30° bottom-right and back) encoded as animated WebP via FrameEncoderService. Reads previewGeometryType from the texture set to render on the appropriate geometry (sphere, box, cylinder, or torus — defaults to plane). Uses uvScale directly as texture repeat multiplier. After generating the thumbnail, generates web proxy textures at the configured size (from settings, or overridden by job.proxySize). Also generates lightweight PNG previews for individual texture files (EXR or >1MB) and uploads them via POST /files/{id}/preview/upload.
  • textureProxyGenerator.js - Generates resized proxy textures per texture type category. Each function receives sourceChannel (0=RGB, 1=R, 2=G, 3=B, 4=A) and calls sharp.extractChannel() for packed/split-channel textures before resizing. Three encoding strategies: sRGB (Albedo, Emissive) → lossy WebP q80 for RGB, or lossless WebP for single-channel extraction; Linear (AO, Roughness, Metallic, Height, Displacement, Opacity, etc.) → lossless WebP, with extractChannel for packed maps or toColourspace('b-w') for full-RGB data; Normal → lossless PNG with per-pixel renormalization after resize (falls back to linear proxy for channel-extracted normals). Filenames include channel suffix (_R, _G, _B, _A) for split-channel proxies to avoid collisions.
  • thumbnailProcessor.js - Generic thumbnail processing; handles .blend → .glb conversion via headless Blender before Puppeteer rendering. After model loading (Step 3), extracts material names from the 3D model via puppeteerRenderer.extractMaterialNames() and saves them to the backend via modelDataService.saveMaterialNames() (PUT /model-versions/{versionId}/material-names). Material extraction is non-fatal — failures are logged but don't block thumbnail generation. Per-material texture application uses textureMappings from the job data filtered by mainVariantName. When mainVariantName is empty/null and no default variant mappings exist, falls back to the first available named variant's mappings. The model's fileType (after any .blend → .glb conversion) is forwarded to applyTextures() to ensure correct flipY behavior: GLTF/GLB/blend use flipY=false; OBJ/FBX use flipY=true — matching the frontend live preview.
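The strategy pattern behind processors/ can be sketched as follows (class and method names here are illustrative, not the actual processorRegistry.js API):

```javascript
// Illustrative strategy-pattern registry: processors register per
// asset type; the job processor resolves one at dispatch time.
class ProcessorRegistry {
  constructor() {
    this.processors = new Map();
  }
  register(jobType, processor) {
    this.processors.set(jobType, processor);
  }
  resolve(jobType) {
    const processor = this.processors.get(jobType);
    if (!processor) throw new Error(`No processor registered for: ${jobType}`);
    return processor;
  }
}

// Usage: one processor per asset type, then dispatch by job type.
const registry = new ProcessorRegistry();
registry.register('Model', { process: job => `mesh:${job.id}` });
registry.register('Sound', { process: job => `waveform:${job.id}` });
```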

imagePreviewGenerator.js - Utility for converting EXR files to PNG (via Three.js EXRLoader + Reinhard tone mapping + sharp) and resizing large standard images. Used by both puppeteerRenderer.js (pre-processing textures for browser) and textureSetProcessor.js (generating individual file previews).

modelFileService.js - Downloads and manages model files from backend API

modelDataService.js - Saves extracted model metadata (material names) to the backend API via authenticated endpoint (X-Api-Key header). Method: saveMaterialNames(modelVersionId, materialNames).
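A sketch of the saveMaterialNames call described above, assuming Node 18's global fetch; the endpoint and header come from the text, but the request body shape is an assumption:

```javascript
// Sketch of modelDataService.saveMaterialNames: PUT extracted names
// to the backend with the worker's API key.
function materialNamesRequest(apiBaseUrl, modelVersionId, materialNames, apiKey) {
  return {
    url: `${apiBaseUrl}/model-versions/${modelVersionId}/material-names`,
    options: {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json', 'X-Api-Key': apiKey },
      body: JSON.stringify({ materialNames }), // body shape is an assumption
    },
  };
}

async function saveMaterialNames(apiBaseUrl, modelVersionId, materialNames, apiKey) {
  const { url, options } = materialNamesRequest(apiBaseUrl, modelVersionId, materialNames, apiKey);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Failed to save material names: ${res.status}`);
}
```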

puppeteerRenderer.js - Browser-based 3D rendering with Puppeteer and Three.js (supports loadModel() for meshes, loadSphere() for sphere previews, and loadPrimitive(type) for arbitrary geometry types including box, cylinder, torus). Textures use RepeatWrapping for proper tiling. Accepts tilingScale parameter and applies texture.repeat.set(tiling.x, tiling.y) to all loaded textures for accurate tiling in generated thumbnails. Pre-processes EXR textures (converts to PNG via imagePreviewGenerator.js) and resizes large images before sending to the browser. Supports all texture types: Albedo, Normal, Height, AO, Roughness, Metallic, Emissive, Bump, Alpha, and Displacement. Includes extractMaterialNames() method that traverses the loaded Three.js scene to collect unique material names from meshes. applyTextures() accepts an optional materialName parameter to apply textures only to meshes whose material name matches (per-material texture application).
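The flipY rule described above (and in thumbnailProcessor.js) reduces to a small decision on file type; a sketch, with an illustrative function name:

```javascript
// flipY per source format, matching the rule in the text:
// GLTF/GLB/blend (blend is converted to .glb) use flipY=false,
// while OBJ/FBX use flipY=true — mirroring the frontend live preview.
function flipYForFileType(fileType) {
  const noFlip = new Set(['gltf', 'glb', 'blend']);
  return !noFlip.has(String(fileType).toLowerCase());
}
```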

frameEncoderService.js - Encodes frames to animated WebP and poster with Sharp/node-webpmux

thumbnailStorageService.js - Uploads generated thumbnails to backend API for models

textureSetApiService.js - Uploads generated thumbnails to backend API for texture sets and individual file previews via uploadFilePreview()

healthServer.js - HTTP server for /health and /status endpoints

jobEventService.js - Sends detailed job event logs to the backend for audit trails

Job Processing Flow

  1. Notification - SignalR hub broadcasts job available
  2. Acquisition - Worker claims job via HTTP (first-come-first-served)
  3. Dispatch - ProcessorRegistry routes to appropriate processor based on asset type:
    • Model → MeshProcessor (3D model orbit animation thumbnail)
    • Sound → SoundProcessor (waveform generation)
    • TextureSet → TextureSetProcessor (animated swing preview on configured geometry)
  4. Download - Fetch asset files from API
  5. Load - Parse asset (3D model with Three.js / audio with FFmpeg / sphere geometry for textures)
  6. Render - Generate orbit frames at configured angles (models) or swing animation frames (texture sets)
  7. Encode - Create animated WebP + poster image for both models and texture sets
  8. Upload - Send thumbnail to API via multipart form
  9. Report - Update job status (completed/failed)

Environment Variables Reference

| Variable | Default | Description |
|---|---|---|
| **Server** | | |
| WORKER_ID | worker-1 | Unique worker identifier |
| WORKER_PORT | 3001 | Health server port |
| **API Connection** | | |
| API_BASE_URL | http://localhost:5009 | Backend API URL |
| WORKER_API_KEY | (empty) | API key for authenticated upload endpoints (X-Api-Key header) |
| NODE_TLS_REJECT_UNAUTHORIZED | - | Set to 0 to accept self-signed certs (dev only) |
| **Processing** | | |
| MAX_CONCURRENT_JOBS | 3 | Concurrent jobs per worker |
| JOB_POLLING_INTERVAL_MS | 5000 | Fallback polling interval (ms) |
| **Rendering** | | |
| RENDER_WIDTH | 256 | Thumbnail width in pixels |
| RENDER_HEIGHT | 256 | Thumbnail height in pixels |
| RENDER_FORMAT | png | Frame format (png, jpeg) |
| BACKGROUND_COLOR | 0xf0f0f0 | Background color (hex) |
| CAMERA_DISTANCE_MULTIPLIER | 2.5 | Camera distance from model |
| **Orbit Animation** | | |
| ENABLE_ORBIT_ANIMATION | true | Generate orbit animation |
| ORBIT_ANGLE_STEP | 12 | Degrees per frame (360/step frames; 30 at default) |
| CAMERA_HEIGHT_MULTIPLIER | 0.75 | Camera height relative to distance |
| **Encoding** | | |
| ENABLE_FRAME_ENCODING | true | Encode to animated WebP |
| ENCODING_FRAMERATE | 10 | Animation framerate (fps) |
| WEBP_QUALITY | 75 | WebP quality (0-100) |
| JPEG_QUALITY | 85 | JPEG quality for poster (0-100) |
| **Storage** | | |
| THUMBNAIL_STORAGE_ENABLED | true | Upload to API |
| THUMBNAIL_STORAGE_PATH | /tmp/modelibr-thumbnails | Local temp storage |
| SKIP_DUPLICATE_THUMBNAILS | true | Skip if hash exists |
| **Logging** | | |
| LOG_LEVEL | info | Logging level (debug, info, warn, error) |
| LOG_FORMAT | pretty | Format (pretty, json) |
| **Cleanup** | | |
| CLEANUP_TEMP_FILES | true | Delete temp files after processing |
| **Health** | | |
| HEALTH_CHECK_ENABLED | true | Enable health endpoint |
| HEALTH_CHECK_PATH | /health | Health check path |

Getting Help

Diagnostic Bundle

Collect diagnostic information:

mkdir -p /tmp/worker-diagnostics
cd /tmp/worker-diagnostics

# Collect logs
docker logs asset-processor > worker.log 2>&1

# Configuration
env | grep -E "(WORKER|API|RENDER)" > config.txt

# Status
curl -s http://localhost:3001/status > status.json

# System info
uname -a > system.txt
node --version >> system.txt
ffmpeg -version >> system.txt

# Create archive
tar -czf ../worker-diagnostics.tar.gz .

Common Error Messages

| Error | Meaning | Solution |
|---|---|---|
| ECONNREFUSED | Cannot connect to API | Verify API URL and API is running |
| ENOSPC | No disk space | Clean temp files, increase disk |
| EADDRINUSE | Port in use | Change WORKER_PORT or stop conflicting service |
| MODULE_NOT_FOUND | Missing dependency | Run npm install |
| exec: no such file | Line ending issue | Rebuild container with docker compose build |

Report Issues

Include:

  1. Description of problem
  2. Steps to reproduce
  3. Environment (OS, Node.js version, Docker version)
  4. Configuration (relevant .env values)
  5. Logs with LOG_LEVEL=debug
  6. Diagnostic bundle
Related Documentation

  • Asset Processor README: src/asset-processor/README.md - Quick reference
  • Backend API: docs/docs/ai-documentation/BACKEND_API.md - API reference
  • Project README: README.md - Full application setup