walkerOS CLI
The walkerOS CLI (@walkeros/cli) is a command-line tool for building, testing, and running event collection flows. It handles the complete workflow from configuration to deployment: bundling flows into optimized JavaScript, testing them with simulated events, and running collection servers locally or in Docker.
Installation
Global Installation (Recommended)
Install globally to use the walkeros command anywhere:
Local Installation
Install in your project for team consistency:
Commands overview
| Command | Purpose | Use Case |
|---|---|---|
bundle | Build production-ready bundle from flow config | Create deployable JavaScript from configuration |
push | Execute event with real API calls (or --simulate for mocked) | Testing and production validation |
validate | Validate events, flows, mappings, contracts, or entries | Check configuration before bundling |
run | Start HTTP event collection server | Accept incoming events via HTTP POST |
cache | Manage CLI package and build caches | Clear stale caches, view cache statistics |
auth | Authentication and identity | Log in, log out, check identity |
projects | Manage walkerOS projects | Create, list, update, delete projects |
flows | Manage walkerOS flows | Create, list, update, delete, duplicate flows |
deploy | Create and manage deployments | Deploy flows to walkerOS cloud or self-hosted |
The push command accepts either a config JSON or a pre-built bundle as input.
Configuration types
The CLI uses types from @walkeros/core:
- `Flow.Config` - Root config file format (`version`, `flows`)
- `Flow.Settings` - Single flow (has `web`/`server`, `sources`, `destinations`)
- `Collector.InitConfig` - Runtime type passed to `startFlow()`
The CLI transforms Flow.Config → Flow.Settings (per flow) → bundled code that uses Collector.InitConfig at runtime.
The bundle section
Build-time concerns for the bundler live under flow.<name>.bundle. Two fields are supported:
- `packages` - The list of npm packages (or local paths) the bundler should install for this flow.
- `overrides` - Pin transitive dependency versions, matching npm's `overrides` semantics. Useful when an upstream package declares an overly narrow version range that conflicts with a newer transitive version in the same tree.
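A sketch of a per-flow `bundle` section. The name-to-version map shape and the versions shown are assumptions, not the authoritative schema — verify with `walkeros validate`:

```shell
# Illustrative bundle section with packages and an override pinning
# @walkeros/core; names, versions, and the map shape are assumptions.
cat > flow.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "bundle": {
        "packages": {
          "@walkeros/collector": "0.4.1",
          "@walkeros/destination-demo": "latest"
        },
        "overrides": {
          "@walkeros/core": "0.4.1"
        }
      }
    }
  }
}
EOF
```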
Package imports
Implicit collector
The collector is added automatically. To pin a specific version (recommended for production), add the collector explicitly:
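A minimal sketch of pinning the collector, assuming the `bundle.packages` map takes exact versions (the version shown is illustrative):

```shell
# Pin the collector to an exact version instead of the implicit default.
cat > flow.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "bundle": {
        "packages": { "@walkeros/collector": "0.4.1" }
      }
    }
  }
}
EOF
```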
Default exports (sources & destinations)
Sources and destinations automatically use their default export - no imports needed:
Utility imports
Use imports when you need specific utility functions from a package:
Explicit code (advanced)
For packages without a default export, specify both imports and code:
Local packages
By default, the CLI downloads packages from npm. For development or testing unpublished packages, you can use local packages instead by specifying a path property.
Configuration
Add a path property to any package to use a local directory instead of npm:
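For example (the relative path is illustrative), a package entry can point at a local checkout instead of a published version:

```shell
# Use a local directory for one package; resolved relative to the config file.
cat > flow.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "bundle": {
        "packages": {
          "@walkeros/destination-demo": { "path": "../packages/destination-demo" }
        }
      }
    }
  }
}
EOF
```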
Resolution Rules
- `path` takes precedence - When both `path` and `version` are specified, `path` is used
- Relative paths - Resolved relative to the config file's directory
- Absolute paths - Used as-is
- `dist` folder - If a `dist/` folder exists, it's used; otherwise the package root is used
Dependency Resolution
When a local package has dependencies on other packages that are also specified with local paths, the CLI will use the local versions for those dependencies too. This prevents npm versions from overwriting your local packages.
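A sketch of such a config, assuming both packages live in a sibling walkerOS checkout (paths illustrative):

```shell
# The collector and its @walkeros/core dependency both point at local
# checkouts, so the local core is linked instead of the npm release.
cat > flow.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "bundle": {
        "packages": {
          "@walkeros/collector": { "path": "../walkerOS/packages/collector" },
          "@walkeros/core": { "path": "../walkerOS/packages/core" }
        }
      }
    }
  }
}
EOF
```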
In this example, even though @walkeros/collector depends on @walkeros/core, the local version of core will be used (not downloaded from npm). This is essential when testing changes across multiple interdependent packages.
Use Cases
Development of custom packages:
Testing local changes to walkerOS packages:
When ready for production, simply remove the path property to use the published npm version.
Getting started
Before using the CLI, you need a flow configuration file. Here's a minimal example:
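A minimal sketch with one flow and a single demo destination. Field names follow the `Flow.Config` shape described above (`version`, `flows`, entry `package` fields); verify against your actual schema:

```shell
# Minimal flow configuration; structure is illustrative.
cat > flow.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "destinations": {
        "demo": { "package": "@walkeros/destination-demo" }
      }
    }
  }
}
EOF
```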
The @walkeros/collector package is automatically added when your flow has sources or destinations.
Save this as flow.json.
Bundle command
The bundle command builds production-ready JavaScript bundles from flow configurations.
Use Case
You've defined your sources, destinations, and transformations in a flow configuration file. Now you need to:
- Download the required npm packages
- Bundle everything into a single optimized JavaScript file
- Deploy it to production (Docker, Cloud Run, serverless functions)
The bundle command handles all of this.
Basic Usage
This writes the optimized bundle to stdout. Use -o to write to a file (e.g., -o ./dist/walker.js for web, -o ./dist/bundle.mjs for server).
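For example, using the documented `-o` flag:

```shell
# Write the bundle to stdout:
walkeros bundle flow.json

# Write to a file (server flows typically use .mjs, web flows .js):
walkeros bundle flow.json -o ./dist/bundle.mjs
```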
Step-by-Step Guide
1. Create a flow configuration
Create server-collect.json:
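A sketch of the config, using the packages named in the bundle output below (an Express source and the demo destination); entry keys are illustrative:

```shell
cat > server-collect.json <<'EOF'
{
  "version": 1,
  "flows": {
    "default": {
      "sources": {
        "http": { "package": "@walkeros/server-source-express" }
      },
      "destinations": {
        "demo": { "package": "@walkeros/destination-demo" }
      }
    }
  }
}
EOF
```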
2. Bundle the flow
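Using the documented `-o` and `--stats` flags:

```shell
walkeros bundle server-collect.json -o ./dist/bundle.mjs --stats
```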
Output:
📦 Downloading packages from npm...
✓ @walkeros/collector@latest
✓ @walkeros/server-source-express@latest
✓ @walkeros/destination-demo@latest
🔨 Bundling...
✓ Bundle created: ./dist/bundle.mjs
📊 Bundle Statistics:
Size: 45.2 KB (minified)
Packages: 3
Format: ESM
3. Review the bundle
The bundle is now ready to deploy!
Options
| Option | Description |
|---|---|
-o, --output <path> | Write bundle to file, directory, or presigned URL |
-f, --flow <name> | Build specific flow (for multi-flow configs) |
--all | Build all flows |
--dockerfile | Generate Dockerfile alongside bundle |
-s, --stats | Show bundle statistics |
--json | Output statistics as JSON (for CI/CD) |
--no-cache | Skip package cache, download fresh |
-v, --verbose | Detailed logging |
When --output is a URL (e.g., a presigned S3 URL), the CLI bundles to a temp file and uploads via HTTP PUT. This is used by the walkerOS cloud service for remote builds.
Multi-Flow Example
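With a config containing several flows, use the documented `--flow` and `--all` flags (the flow name `web` is illustrative):

```shell
# Build one named flow from a multi-flow config:
walkeros bundle flow.json --flow web -o ./dist/walker.js

# Build every flow in the config:
walkeros bundle flow.json --all -o ./dist/
```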
Push command
The push command executes your flow with a real event. By default it makes actual API calls to your configured destinations. Use --simulate to mock specific steps for safe testing. It accepts either a config JSON (which gets bundled) or a pre-built bundle.
Use Case
You want to:
- Test event processing with mocked destinations (`--simulate`)
- Test with real third-party APIs (GA4, Meta, BigQuery, etc.)
- Verify production credentials and endpoints work
- Debug actual API responses and errors
- Perform integration testing before deployment
- Execute a pre-built bundle without rebuilding
Push handles both safe local testing (with --simulate) and real integration testing.
Basic Usage
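Using the documented `--event` and `--simulate` flags (the destination name `demo` is illustrative):

```shell
# Execute the event with real API calls:
walkeros push flow.json --event event.json

# Safe testing with a mocked destination push:
walkeros push flow.json --event event.json --simulate destination.demo
```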
Step-by-Step Guide
1. Create a flow configuration
Create api-flow.json with a destination that makes real HTTP calls:
2. Create an event file
Create event.json:
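A sketch of an event file following the "entity action" naming convention; the `data` and `user` fields are illustrative:

```shell
cat > event.json <<'EOF'
{
  "name": "order complete",
  "data": { "id": "abc123", "total": 42 },
  "user": { "id": "user123" }
}
EOF
```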
3. Push the event
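Run push against the config and event file:

```shell
walkeros push api-flow.json --event event.json
```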
4. Review the output
📥 Loading event...
📦 Loading flow configuration...
🔨 Bundling flow configuration...
🖥️ Executing in server environment (Node.js)...
Pushing event: order complete
✅ Event pushed successfully
Event ID: 1701234567890-abc12-1
Entity: order
Action: complete
Duration: 1234ms
The event was sent to your real API endpoint!
Options
| Option | Description |
|---|---|
-e, --event <source> | Required. Event to push (JSON string, file path, or URL) |
-f, --flow <name> | Flow name (for multi-flow configs) |
-p, --platform <platform> | Platform override (web or server) |
--simulate <step> | Simulate a step (repeatable). Mocks the step's push, captures result. Use destination.NAME or source.NAME. |
--mock <step=value> | Mock a step with a specific return value (repeatable). Use destination.NAME=VALUE. |
--snapshot <source> | JS file to eval before execution. Sets global state (window.dataLayer, process.env, etc.). |
-o, --output <path> | Write result to file |
--json | Output results as JSON |
-v, --verbose | Verbose output with debug information |
-s, --silent | Suppress output (for CI/CD) |
Input Types
The CLI auto-detects the input type by attempting to parse as JSON:
- Config JSON - Bundled and executed
- Pre-built bundle (`.js`/`.mjs`) - Executed directly
When using pre-built bundles, platform is detected from file extension:
- `.mjs` → server (ESM, Node.js)
- `.js` → web (IIFE, JSDOM)
Use --platform to override if extension doesn't match intended runtime.
Event Input Formats
The --event parameter accepts three formats:
Inline JSON string:
File path:
URL:
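The three formats above can be sketched as follows (file name and URL are illustrative):

```shell
# Inline JSON string:
walkeros push flow.json -e '{"name":"page view"}'

# File path:
walkeros push flow.json -e ./event.json

# URL:
walkeros push flow.json -e https://example.com/fixtures/event.json
```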
Push modes
| Mode | Flag | API Calls | Use Case |
|---|---|---|---|
| Real | (none) | Real HTTP requests | Integration testing, production validation |
| Simulate | --simulate | Mocked (captured) | Safe local testing |
| Mock | --mock | Returns mock value | Controlled testing |
Recommended workflow:
- Use `push --simulate` first to validate configuration without side effects
- Use `push` (without flags) to verify real integrations work before deployment
JSON Output
For CI/CD pipelines, use --json for machine-readable output:
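For example:

```shell
walkeros push flow.json -e event.json --json
```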
Output:
{
"success": true,
"event": {
"id": "1701234567890-abc12-1",
"name": "page view",
"entity": "page",
"action": "view"
},
"duration": 1234
}
On error:
{
"success": false,
"error": "Connection refused: https://your-endpoint.com/events",
"duration": 5023
}
Multi-Flow
Push to specific flows:
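Using the documented `--flow` flag (the flow name `staging` is illustrative):

```shell
walkeros push flow.json --flow staging -e event.json
```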
Validate command
The validate command checks the structure and correctness of events, flow configurations, mapping configurations, contracts, or individual flow entries before bundling or deployment.
Use case
Before bundling or deploying your flow, you want to:
- Catch configuration errors early
- Verify event structure follows walkerOS conventions
- Check mapping patterns are valid
- Validate contracts define proper entity-action schemas
- Validate a specific destination, source, or transformer entry against its package's published JSON Schema
- Integrate validation into CI/CD pipelines
Validate gives you fast feedback without the overhead of bundling.
Basic usage
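Using the documented `--type` option (file names illustrative):

```shell
# Validate a flow configuration (the default type):
walkeros validate flow.json

# Validate a single event or a mapping file:
walkeros validate event.json --type event
walkeros validate mapping.json --type mapping
```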
Validation types
The --type option accepts four validation types. Default is flow:
| Type | What it validates |
|---|---|
event | Event structure: name field exists, follows "entity action" format with space, valid data types |
flow (default) | Flow.Config config: schema, references, packages, and cross-step example compatibility |
mapping | Mapping rules: event patterns use "entity action" or wildcard format, rule structures are valid |
contract | Contract structure: entity-action entries map to JSON Schema objects, optional $tagging metadata |
Use --path for entry validation against package schemas (e.g., --path destinations.snowplow).
Step-by-step guide
1. Validate a flow configuration
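Run validate with the default `flow` type:

```shell
walkeros validate flow.json
```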
Output (valid):
Validating flow...
Validation Results:
✓ All checks passed
Summary: 0 error(s), 0 warning(s)
2. Validate with errors
Output:
Validating event...
Validation Results:
✗ name: Event name must be "entity action" format with space (e.g., "page view")
Summary: 1 error(s), 0 warning(s)
3. Validate with warnings
Output:
Validating event...
Validation Results:
✓ All checks passed
⚠ consent: No consent object provided
→ Consider adding a consent object for GDPR/privacy compliance
Summary: 0 error(s), 1 warning(s)
Options
| Option | Description |
|---|---|
-t, --type <type> | Validation type (default: flow). Also: event, mapping, contract |
--path <path> | Validate a specific entry against its package schema (e.g., destinations.snowplow) |
-f, --flow <name> | Flow name to validate (for multi-flow configs) |
-o, --output <path> | Write result to file |
--strict | Treat warnings as errors (exit code 2) |
--json | Output results as JSON |
-v, --verbose | Show detailed validation information |
-s, --silent | Suppress banner output |
Exit codes
| Code | Meaning |
|---|---|
| 0 | Valid (no errors) |
| 1 | Validation errors found |
| 2 | Warnings found (with --strict) |
| 3 | Input error (file not found, invalid JSON, etc.) |
CI/CD integration
Use --json and exit codes for automated pipelines:
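A minimal sketch of a pipeline step using the documented flags and exit codes:

```shell
# Fails the job on validation errors (exit 1) and, with --strict,
# on warnings too (exit 2).
walkeros validate flow.json --json --strict || exit 1
```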
JSON output format:
{
"valid": true,
"type": "flow",
"errors": [],
"warnings": [
{
"path": "packages.@walkeros/destination-demo",
"message": "Package \"@walkeros/destination-demo\" has no version specified",
"suggestion": "Consider specifying a version for reproducible builds"
}
],
"details": {
"flowNames": ["default"],
"flowCount": 1,
"packageCount": 2
}
}
Multi-flow validation
Validate a specific flow in a multi-flow configuration:
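Using the documented `--flow` flag:

```shell
walkeros validate flow.json --flow production
```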
If the flow doesn't exist:
Validation Results:
✗ flows: Flow "production" not found. Available: default, staging
Summary: 1 error(s), 0 warning(s)
Event validation details
Event validation checks (see event.ts):
- Name field exists - Required field
- Name is non-empty - Cannot be empty string
- Entity-action format - Must contain a space (e.g., `"page view"`, not `"pageview"`)
- Schema validation - Data types match expected structure
- Best practices - Warns if consent object is missing
Mapping validation details
Mapping validation checks (see mapping.ts):
- Object structure - Must be an object with event patterns as keys
- Event patterns - Must be `"entity action"` format or contain wildcard (`*`)
- Rule structure - Each rule must be an object or array of objects
- Catch-all position - Warns if `*` is not the last pattern
Contract validation details
Contract validation checks (see contract.ts):
- Root structure - Must be an object (not array or primitive)
- `$tagging` metadata - If present, must be a non-negative integer
- Entity keys - Cannot be empty strings
- Action entries - Each entity must contain an object of action entries
- Schema entries - Each action value must be a JSON Schema object
Entry validation (--path)
Entry validation checks a specific destination, source, or transformer in your flow config against the package's published JSON Schema. The CLI fetches the schema from the package on the CDN and validates the config.settings object using AJV.
The entry validator:
- Resolves the entry from the flow config (first flow is used)
- Reads the `package` field to identify the npm package
- Fetches the package's JSON Schema from the CDN
- Validates the entry's `config.settings` against the schema
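For example, using the documented `--path` option (the entry key `destinations.snowplow` follows the example given above):

```shell
walkeros validate flow.json --path destinations.snowplow
```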
Auth commands
The auth command group manages authentication with the walkerOS cloud service. Authentication is required for cloud commands (projects, flows, deploy).
Login
Log in to walkerOS via an OAuth browser flow. The CLI requests a device code, opens your browser for authorization, and polls for the resulting token.
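Start the browser-based login:

```shell
walkeros auth login
```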
Output:
! Your one-time code: ABCD-1234
Authorize here: https://app.walkeros.io/auth/device?code=ABCD-1234
Opening browser...
Waiting for authorization... (press Ctrl+C to cancel)
✓ Logged in as user@example.com
Token stored in ~/.config/walkeros/config.json
The token is stored in ~/.config/walkeros/config.json with 0600 permissions. You can also set the WALKEROS_TOKEN environment variable instead of using auth login.
| Option | Description |
|---|---|
--url <url> | Custom app URL (default: https://app.walkeros.io) |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Logout
Remove stored credentials from disk.
| Option | Description |
|---|---|
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Whoami
Show the current authenticated user's identity, including email, user ID, and project ID (if the token is project-scoped).
Output:
user@example.com
User: usr_abc123
Project: proj_def456
| Option | Description |
|---|---|
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Token resolution
The CLI resolves authentication tokens in this order:
1. `WALKEROS_TOKEN` environment variable
2. Config file (`~/.config/walkeros/config.json`, written by `auth login`)
3. Not authenticated (cloud commands will fail)
Projects commands
The projects command group manages walkerOS cloud projects. All commands require authentication (see Auth commands).
List projects
Get project details
Create a project
Update a project
Delete a project
Common options
All projects subcommands support:
| Option | Description |
|---|---|
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Flows commands
The flows command group manages flow configurations within a project. All commands require authentication and a project context (either --project or WALKEROS_PROJECT_ID).
List flows
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
--sort <field> | Sort by: name, updated_at, created_at |
--order <dir> | Sort order: asc, desc |
--include-deleted | Include soft-deleted flows |
Get a flow
Retrieves a flow with its full Flow.Config JSON content.
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
Create a flow
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
-c, --content <json> | Flow.Config JSON string or file path |
Update a flow
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
--name <name> | New flow name |
-c, --content <json> | New Flow.Config JSON string or file path |
Delete a flow
Soft-deletes a flow configuration.
Duplicate a flow
Creates a copy of an existing flow configuration.
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
--name <name> | Name for the copy (defaults to "Copy of ...") |
Common options
All flows subcommands support:
| Option | Description |
|---|---|
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Deploy commands
The deploy command group handles deploying flows to the walkerOS cloud or managing self-hosted deployments with heartbeat registration. All commands require authentication.
deploy create
Create a new deployment. The CLI infers the deployment type (web or server) from the flow configuration.
On success, the CLI displays the deployment ID, slug, type, and a one-time deploy token. It also shows example commands for running the deployment locally or via Docker.
| Option | Description |
|---|---|
--label <string> | Human-readable label for the deployment |
-f, --flow <name> | Flow name for multi-flow configs |
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
deploy start
Deploy a remote flow to walkerOS cloud infrastructure. Auto-detects whether to use web (script hosting) or server (container) deployment based on the flow content. Streams deployment progress via SSE.
Output (web deployment):
Building bundle...
Publishing to web...
✓ Published: https://cdn.walkeros.io/proj_xxx/walker.js
Output (server deployment):
Building bundle...
Deploying container...
Starting container...
✓ Active: https://collect-abc123.walkeros.io
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
-f, --flow <name> | Flow name (for multi-flow configs) |
--no-wait | Return immediately after triggering (do not stream progress) |
--timeout <seconds> | Timeout for deployment polling (default: 120) |
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
deploy list
List all deployments in a project.
| Option | Description |
|---|---|
--project <id> | Project ID (defaults to WALKEROS_PROJECT_ID) |
--type <type> | Filter by type: web, server |
--status <status> | Filter by status |
deploy status
Get deployment details by ID or slug.
| Option | Description |
|---|---|
--project <id> | Project ID |
deploy delete
Delete a deployment by ID or slug.
| Option | Description |
|---|---|
--project <id> | Project ID |
--json | Output as JSON |
Common options
All deploy subcommands support:
| Option | Description |
|---|---|
-o, --output <path> | Write result to file |
--json | Output as JSON |
-v, --verbose | Verbose output |
-s, --silent | Suppress output |
Run command
The run command starts an HTTP server that accepts events and processes them through your flow.
Use Case
You need an HTTP endpoint to:
- Receive events from browser clients, mobile apps, or server-side sources
- Process events through your collector and destinations
- Test the full event pipeline locally before deploying to production
This is similar to running a Segment or Jitsu collection endpoint.
Basic Usage
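Start the server from a flow config, using the documented `--port` flag:

```shell
walkeros run collect.json --port 8080
```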
Step-by-Step Guide
1. Create a collection flow
Create collect.json:
2. Start the collector
Output:
📦 Bundling flow...
✓ Bundle ready
🚀 Starting collection server...
✓ Server running on http://localhost:8080
✓ Endpoint: POST http://localhost:8080/collect
✓ Health check: GET http://localhost:8080/health
3. Send test events
Open a new terminal and send events:
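For example, post an event to the `/collect` endpoint shown above (assumes the server from step 2 is running):

```shell
curl -X POST http://localhost:8080/collect \
  -H 'Content-Type: application/json' \
  -d '{"name":"page view","data":{"title":"Home Page","path":"/"}}'
```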
4. See events in console
The collector terminal shows:
[Event Collector] page view
data: {"title":"Home Page","path":"/"}
user.id: user123
timestamp: 1701234567890
[Event Collector] product view
data: {"id":"P123","name":"Laptop","price":999}
timestamp: 1701234567891
Options
| Option | Description |
|---|---|
-p, --port <number> | Server port (default: 8080) |
-h, --host <address> | Host address (default: 0.0.0.0) |
--deploy <id-or-slug> | Deployment ID or slug (enables heartbeat to walkerOS cloud) |
--project <id> | Project ID (used with --deploy) |
--url <url> | Public URL of this server (used with --deploy) |
--health-endpoint <path> | Health check path (default: /health) |
--heartbeat-interval <seconds> | Heartbeat interval in seconds (default: 60) |
--json | Output as JSON |
-v, --verbose | Detailed logging |
-s, --silent | Suppress output |
Heartbeat registration
When --deploy is provided, the collector registers itself with the walkerOS cloud via periodic heartbeats. This makes the running instance visible in the project dashboard and enables remote management.
The heartbeat sends instance ID, uptime, CLI version, and mode. The server can respond with update (triggering a bundle refresh) or stop (graceful shutdown).
Running Pre-Built Bundles
You can also run pre-built bundles directly:
Complete example: Web → Server flow
This example demonstrates a complete analytics pipeline:
- Browser events captured by web flow
- Sent to server collection endpoint
- Logged to console (swap for BigQuery in production)
1. Create Server Collection Flow
Create server-collect.json:
2. Create Web Tracking Flow
Create web-track.json:
3. Start Collection Server
Terminal 1:
4. Start Web Server
Terminal 2:
5. Test in Browser
Create demo.html:
Open in browser. Terminal 1 shows:
[Server Logger] page view
[Server Logger] promotion view
[Server Logger] promotion cta
[Server Logger] custom event
Cache command
The cache command manages the CLI's package and build caches.
Use Case
The CLI caches downloaded npm packages and compiled builds to speed up repeated operations. You may need to:
- Clear stale cached packages when debugging version issues
- Free up disk space by removing old cached builds
- View cache statistics to understand cache usage
How Caching Works
Package Cache (.tmp/cache/packages/):
- Mutable versions (`latest`, `^`, `~`) are re-checked daily
- Exact versions (`0.4.1`) are cached indefinitely
- Saves network time on repeated builds
Build Cache (.tmp/cache/builds/):
- Caches compiled bundles based on flow.json content + date
- Identical configs reuse cached builds within the same day
- Dramatically speeds up repeated builds (~100x faster)
Basic Usage
Commands
View cache info:
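A sketch of the command — the `info` subcommand name is an assumption, so check `walkeros cache --help`:

```shell
walkeros cache info
```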
Output:
Cache directory: .tmp/cache
Cached packages: 12
Cached builds: 5
Clear all caches:
Clear only package cache:
Clear only build cache:
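Sketches of the three clear operations, using the documented `--packages` and `--builds` flags (the `clear` subcommand name is an assumption):

```shell
walkeros cache clear              # clear all caches
walkeros cache clear --packages   # package cache only
walkeros cache clear --builds     # build cache only
```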
Bypassing Cache
To skip the cache for a single build operation:
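Using the documented `--no-cache` flag:

```shell
walkeros bundle flow.json --no-cache
```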
This downloads fresh packages and rebuilds without using or updating the cache.
Options
| Option | Description |
|---|---|
--packages | Clear only the package cache |
--builds | Clear only the build cache |
Global options
These options work with all commands:
| Option | Description |
|---|---|
--verbose | Show detailed logs |
--silent | Suppress output |
--json | Output as JSON (for CI/CD) |
--help | Show help for command |
--version | Show CLI version |
Environment variables
| Variable | Purpose |
|---|---|
WALKEROS_TOKEN | API token for cloud commands (overrides auth login config) |
WALKEROS_PROJECT_ID | Default project ID for projects, flows, and deploy commands |
WALKEROS_APP_URL | Base URL override (default: https://app.walkeros.io) |
WALKEROS_DEPLOY_TOKEN | Deploy token for container heartbeat authentication |
CI/CD integration
GitHub Actions
Docker Build
Troubleshooting
Package Download Issues
If packages fail to download:
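A sketch of a recovery sequence using the documented `--no-cache` and `--verbose` flags (the `cache clear` subcommand name is an assumption):

```shell
# Clear the package cache and retry with fresh downloads:
walkeros cache clear --packages
walkeros bundle flow.json --no-cache --verbose
```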
Build Issues
If you encounter build issues:
Port Already in Use
If the port is already in use:
Authentication Issues
If cloud commands fail with authentication errors:
Next steps
- Flow Configuration - Learn about flow config structure
- Docker Deployment - Deploy flows to production
- Runner - Self-hosted runner with config polling and hot-swap
- MCP Server - Use CLI tools from AI assistants
- Sources - Explore event sources
- Destinations - Configure analytics tools
- Mapping - Transform events for destinations
See Also
Using Integrated mode instead? The CLI uses JSON configuration (Bundled mode). If you prefer TypeScript and want the collector built into your application, see Integrated Mode and the Collector documentation.
Both approaches use the same underlying architecture. The difference is how you configure and deploy.