[Figure: Architecture diagram showing data flow between Wialon and ERP systems with retry and reconciliation layers]

Wialon Integration with ERP: Architecture and Pitfalls

Most Wialon-to-ERP projects fail for non-obvious reasons: weak data contracts, unclear ownership, and no reconciliation routine. This guide summarizes an architecture that stays stable under daily operational load.

Start with business events, not API endpoints

The most common mistake in Wialon-to-ERP integration is opening the Remote API documentation and building outward from endpoint catalogs. Teams inventory every available call — core/search_items, unit/get_messages, report/exec_report — and start pulling data before anyone defines what business question that data needs to answer. The result is a pipeline that syncs thousands of raw messages per hour and nobody in operations knows what to do with them.

Event-first design flips this. You start by naming the business events that matter: trip completed, geofence boundary crossed, engine idle threshold breached, fuel level anomaly detected, driver shift started. Each event has a clear consumer in the ERP — a transport order status update, a payroll trigger, a maintenance work order. Only after events are named and mapped to downstream actions do you select the Wialon API calls and message types required to detect them.

This approach eliminates an entire class of waste. Instead of pulling every unit message and filtering later, you subscribe to the specific notification types or poll the specific message intervals that produce your defined events. The extraction layer becomes smaller, the transformation logic becomes explicit, and the ERP receives records it can actually process. When stakeholders ask 'why did the ERP not update trip 4821?', you can trace from the business event definition back to the exact API call and message window.
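The event-first mapping described above can be sketched as a small registry. The event names, polling intervals, and ERP action names here are illustrative assumptions, not a prescribed catalog; only the Wialon service names (core/search_items, unit/get_messages, report/exec_report) come from the article.

```python
# Hypothetical event registry: each business event maps to the Wialon
# data source needed to detect it and the ERP action it feeds. All
# intervals and action names are illustrative.
BUSINESS_EVENTS = {
    "trip_completed": {
        "wialon_source": "unit/get_messages",   # poll trip-end messages
        "poll_interval_s": 300,
        "erp_action": "update_transport_order_status",
    },
    "geofence_crossed": {
        "wialon_source": "notifications",       # push-style notification
        "poll_interval_s": None,
        "erp_action": "log_route_deviation",
    },
    "fuel_anomaly": {
        "wialon_source": "report/exec_report",  # scheduled fuel report
        "poll_interval_s": 3600,
        "erp_action": "open_investigation_ticket",
    },
}

def required_api_calls():
    """Return the minimal set of Wialon data sources the pipeline needs."""
    return sorted({e["wialon_source"] for e in BUSINESS_EVENTS.values()})
```

The payoff is that the extraction layer is derived from the event list rather than the other way around: anything not in the registry simply is not pulled.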

Define field ownership before implementation

Every integration surface between Wialon and an ERP has shared fields: vehicle registration number, driver assignment, odometer reading, fuel tank capacity. When both systems can write to the same attribute, drift is not a risk — it is a certainty. Within weeks of going live, you will find vehicles with different odometer readings in each system, drivers assigned in Wialon but not in the ERP, and fuel capacity values that diverged after a manual edit on one side.

The fix is a field ownership matrix created before any code is written. For each shared attribute, one system is the master and the other is the consumer. Vehicle metadata (VIN, registration, class) typically belongs to the ERP because that is where procurement and compliance live. Real-time telemetry fields (GPS position, sensor readings, current driver via iButton) belong to Wialon because that is where hardware feeds. Odometer is trickier — Wialon calculates it from GPS or CAN bus, but the ERP may have a 'last verified odometer' from a maintenance inspection. You need a precedence rule: use Wialon's value for daily operations, but allow the ERP to override it during a verified maintenance event.

Late-arriving events create a second ownership problem. If a vehicle completes a trip at 23:50 but the message does not reach your pipeline until 01:15 the next day, which date owns the trip? If the ERP closes its daily batch at midnight, the trip either gets double-counted or lost entirely. Define timestamp precedence explicitly: event_time from Wialon is authoritative, ingestion_time is metadata. Build your ERP sync to reopen or amend prior periods when late events arrive, rather than silently dropping them.

  • Create a shared spreadsheet mapping every field to its owning system, update cadence, and conflict resolution rule.
  • Establish a 'last-writer-wins with audit trail' policy for fields that genuinely need bidirectional updates.
  • Define a grace period for late-arriving events — typically 4 to 6 hours — after which manual reconciliation is required.
  • Run weekly drift detection queries that compare key fields across systems and flag mismatches for review.
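A weekly drift check like the one in the last bullet can be as simple as one join plus a per-field summary. The table and column names (erp_vehicles, wialon_units) and the 5 km odometer tolerance are hypothetical; adapt them to your schema.

```python
# Sketch of a weekly drift check comparing shared fields across systems.
# Table names, column names, and the odometer tolerance are assumptions.
DRIFT_QUERY = """
SELECT e.vehicle_id,
       e.registration AS erp_registration,
       w.registration AS wialon_registration,
       e.odometer_km  AS erp_odometer_km,
       w.odometer_km  AS wialon_odometer_km
FROM erp_vehicles e
JOIN wialon_units w ON w.vehicle_id = e.vehicle_id
WHERE e.registration IS DISTINCT FROM w.registration
   OR ABS(e.odometer_km - w.odometer_km) > 5  -- tolerance for GPS-derived drift
"""

def summarize_drift(rows):
    """Group mismatched rows by field so the review list is actionable."""
    summary = {"registration": 0, "odometer_km": 0}
    for r in rows:
        if r["erp_registration"] != r["wialon_registration"]:
            summary["registration"] += 1
        if abs(r["erp_odometer_km"] - r["wialon_odometer_km"]) > 5:
            summary["odometer_km"] += 1
    return summary
```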

Design the data contract layer

Loosely typed JSON payloads passed between Wialon extraction and ERP ingestion are a ticking time bomb. The first version works fine because the developer who wrote the producer also wrote the consumer. Six months later, someone adds a field, changes a unit of measurement from liters to gallons, or renames 'driver_id' to 'operator_id'. The consumer breaks at 2 AM and nobody knows why until the morning shift notices missing data.

A data contract is a versioned schema definition that both the producer and consumer agree to. It specifies field names, types, units, nullability, and valid value ranges. For a trip completion event, the contract might define trip_id as a required string, distance_km as a required float with two decimal places, and driver_code as an optional string that must match a defined format. Any payload that violates the contract is rejected at ingestion, not silently absorbed.

Version the contract explicitly. Version 1.0 has distance in kilometers; version 1.1 adds fuel_consumed_liters as an optional field; version 2.0 changes driver_code from optional to required. Consumers declare which contract version they support. When a breaking change ships, run both versions in parallel during a migration window. This prevents the cascading failures that plague integrations where 'we just added a field and everything broke'.
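A minimal version of this contract machinery can be expressed as data plus one validation function. The version split shown (driver_code becoming required in 2.0) follows the example in the text; everything else is a sketch, not a recommendation of any particular schema library.

```python
# Versioned data contracts for the trip completion event. Field names
# follow the article's example; the structure itself is illustrative.
CONTRACTS = {
    "1.0": {
        "trip_id":     {"type": str,   "required": True},
        "distance_km": {"type": float, "required": True},
        "driver_code": {"type": str,   "required": False},
    },
    "2.0": {
        "trip_id":     {"type": str,   "required": True},
        "distance_km": {"type": float, "required": True},
        "driver_code": {"type": str,   "required": True},  # breaking change
    },
}

def validate(payload: dict, version: str) -> list:
    """Return a list of contract violations; an empty list means it passes."""
    errors = []
    contract = CONTRACTS[version]
    for field, rule in contract.items():
        if field not in payload:
            if rule["required"]:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(payload[field], rule["type"]):
            errors.append(f"wrong type for {field}")
    for field in payload:
        if field not in contract:
            errors.append(f"unknown field: {field}")  # reject, don't absorb
    return errors
```

Rejecting unknown fields at ingestion is what turns "someone added a field" into a visible validation error instead of a silent 2 AM failure.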

Implement retries and reconciliation as first-class features

Production integrations fail regularly. Network timeouts, Wialon session token expiration mid-batch, ERP database locks during month-end processing, malformed records from a newly provisioned tracker — these are normal operating conditions, not edge cases. If your pipeline treats any failure as fatal and stops processing, you will have data gaps within the first week.

Every write operation must be idempotent. Use a deduplication key derived from the business event — for example, a composite of unit_id, event_type, and event_timestamp truncated to the second. When a retry delivers the same event twice, the consumer upserts rather than inserts. PostgreSQL's ON CONFLICT clause makes this straightforward. Without idempotency, retries create duplicate records that inflate trip counts, fuel totals, and every downstream report.
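The deduplication key and upsert described above might look like the following. The table and column names are hypothetical; the ON CONFLICT clause is standard PostgreSQL.

```python
from datetime import datetime, timezone

def dedup_key(unit_id: int, event_type: str, event_time: datetime) -> str:
    """Composite key: unit, event type, timestamp truncated to the second."""
    ts = event_time.replace(microsecond=0, tzinfo=timezone.utc)
    return f"{unit_id}:{event_type}:{ts.isoformat()}"

# Idempotent write: a retried event updates the existing row instead of
# inserting a duplicate. Table and column names are assumptions.
UPSERT_SQL = """
INSERT INTO erp_trip_events (dedup_key, unit_id, event_type, payload)
VALUES (%(dedup_key)s, %(unit_id)s, %(event_type)s, %(payload)s)
ON CONFLICT (dedup_key) DO UPDATE
SET payload = EXCLUDED.payload
"""
```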

Retries need structure: exponential backoff with jitter, starting at 1 second, doubling to 2, 4, 8, then capping at 60 seconds. The jitter — a random offset of up to 30% of the delay — prevents thundering herd when multiple workers retry simultaneously after a shared outage. After a configurable number of retries (typically 5), route the failed record to a dead-letter queue for manual inspection rather than retrying indefinitely.
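The retry schedule above translates directly into code. This is a minimal sketch: `send` stands in for whatever delivery call your pipeline makes, and the in-memory dead-letter list would be a durable queue in production.

```python
import random
import time

dead_letter_queue = []  # stand-in for a durable queue of failed records

def backoff_delays(base=1.0, cap=60.0, retries=5, jitter=0.3):
    """Exponential backoff: 1, 2, 4, 8... seconds, capped, plus up to 30% jitter."""
    for attempt in range(retries):
        delay = min(base * (2 ** attempt), cap)
        yield delay + delay * random.uniform(0, jitter)

def send_with_retry(send, record, retries=5):
    """Deliver a record with bounded retries; dead-letter it on exhaustion."""
    for delay in backoff_delays(retries=retries):
        try:
            return send(record)
        except IOError:
            time.sleep(delay)
    dead_letter_queue.append(record)  # manual inspection, never infinite retry
```

The jitter term is what prevents every worker from waking up at the same instant after a shared outage.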

Nightly reconciliation is the safety net that catches everything retries miss. Run a comparison query that joins the source extraction log against the ERP target table on the deduplication key. Any record present in the extraction log but missing from the ERP is a gap. Any record present in both but with differing values is a drift. Publish this report to a shared channel every morning. If the gap count exceeds your threshold — we use 0.1% of daily volume — trigger an alert before the operations team starts their shift.
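The nightly reconciliation join and the 0.1% alert threshold can be sketched as follows. Table names and the payload_hash comparison column are assumptions about your schema.

```python
# Nightly reconciliation: join the extraction log to the ERP target on the
# deduplication key. Missing rows are gaps; differing hashes are drift.
RECONCILIATION_SQL = """
SELECT s.dedup_key,
       CASE
         WHEN t.dedup_key IS NULL              THEN 'gap'
         WHEN s.payload_hash <> t.payload_hash THEN 'drift'
       END AS issue
FROM extraction_log s
LEFT JOIN erp_trip_events t ON t.dedup_key = s.dedup_key
WHERE t.dedup_key IS NULL
   OR s.payload_hash <> t.payload_hash
"""

def should_alert(gap_count: int, daily_volume: int, threshold=0.001) -> bool:
    """Page before the shift if gaps exceed 0.1% of daily volume."""
    return daily_volume > 0 and gap_count / daily_volume > threshold
```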

Handle authentication lifecycle

Wialon Remote API uses session tokens (the 'sid' parameter) obtained via the token/login endpoint. Each session has an inactivity timeout — typically 5 minutes on Wialon Hosting, configurable on Wialon Local. If your pipeline takes longer than this to process a batch without making an API call, the session expires silently. The next request returns error code 1 (invalid session), and if your code does not handle this specifically, it logs a generic 'request failed' and moves on, leaving a gap in your data.

The most common authentication mistake is hardcoding a single long-lived token shared across all workers. When that token is revoked — because an admin regenerates it, or because Wialon's token limit per user is reached — every worker fails simultaneously. Instead, implement a token pool: each worker obtains its own session via token/login using a shared API token, manages its own session lifecycle, and refreshes proactively before the inactivity timeout. A background heartbeat (calling avl_evts every 60 seconds) keeps the session alive during long processing gaps.
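A per-worker session wrapper along these lines covers login, proactive refresh, and recovery from error code 1. The `api_call` helper and the 30-second refresh margin are assumptions; the service names (token/login, avl_evts) and the error code are from the Wialon Remote API as described above.

```python
import time

SESSION_TIMEOUT_S = 300      # 5-minute inactivity timeout (Wialon Hosting)

class WialonSession:
    """Per-worker session sketch; `api_call` is a hypothetical HTTP helper
    that sends a Wialon Remote API request and returns the parsed JSON."""

    def __init__(self, api_token, api_call):
        self.api_token = api_token
        self.api_call = api_call
        self.sid = None
        self.last_activity = 0.0

    def login(self):
        resp = self.api_call("token/login", {"token": self.api_token})
        self.sid = resp["eid"]           # session id from the login response
        self.last_activity = time.monotonic()

    def request(self, svc, params):
        # Refresh proactively rather than waiting for error code 1.
        idle = time.monotonic() - self.last_activity
        if self.sid is None or idle > SESSION_TIMEOUT_S - 30:
            self.login()
        resp = self.api_call(svc, params, sid=self.sid)
        if resp.get("error") == 1:       # invalid session: re-login once
            self.login()
            resp = self.api_call(svc, params, sid=self.sid)
        self.last_activity = time.monotonic()
        return resp

    def heartbeat(self):
        """Call avl_evts to keep the session alive during long processing."""
        self.request("avl_evts", {})
```

Run `heartbeat()` from a background timer every 60 seconds and the inactivity timeout stops being a failure mode.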

For multi-tenant deployments where you integrate with multiple Wialon accounts, isolate credentials per tenant in a secrets manager. Never store Wialon API tokens in environment variables or config files checked into version control. Rotate tokens on a quarterly schedule and during any security incident. Log every authentication event — login, refresh, expiration, failure — to a dedicated audit table so you can diagnose 'why did sync stop at 3 AM?' without guessing.

Plan for schema evolution

Both sides of the integration will change their schemas over time. Wialon adds new unit properties, changes report column names between versions, or deprecates message fields. The ERP team adds columns, changes foreign key relationships, or migrates to a new module. If your integration is a rigid point-to-point mapping, every change on either side requires a synchronized deployment — and synchronized deployments across teams that release on different schedules are a fiction.

Build versioned transformation layers between the raw Wialon extraction and the ERP-ready payload. Each transformation version is a pure function: given input schema version X, produce output schema version Y. When Wialon changes its output, you add a new input adapter without touching the existing one. When the ERP changes its input requirements, you add a new output adapter. The transformation registry maps input-output version pairs to the correct function. Old versions remain available for replay and debugging.
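The transformation registry can be a dictionary keyed on version pairs. The version labels and field names here are illustrative; the point is that adding an adapter never touches an existing one.

```python
# Versioned transformation registry: each (input, output) version pair
# maps to a pure function. Version labels and fields are illustrative.
TRANSFORMS = {}

def transform(input_ver, output_ver):
    def register(fn):
        TRANSFORMS[(input_ver, output_ver)] = fn
        return fn
    return register

@transform("wialon-1", "erp-1")
def trip_v1(msg):
    # Older extraction reports mileage in meters.
    return {"trip_id": msg["id"], "distance_km": msg["mileage"] / 1000}

@transform("wialon-2", "erp-1")
def trip_v2(msg):
    # Newer extraction already reports kilometers; the old adapter above
    # stays available for replay and debugging.
    return {"trip_id": msg["id"], "distance_km": msg["distance_km"]}

def run(input_ver, output_ver, msg):
    return TRANSFORMS[(input_ver, output_ver)](msg)
```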

Use feature flags to roll out schema changes gradually. Deploy the new transformation version to production but activate it only for a subset of vehicles or a single business unit. Compare outputs between old and new versions for 48 hours. If the new version produces identical results for shared fields and correctly populates the new fields, promote it to 100%. If it diverges, you caught a bug before it affected the entire fleet. This approach eliminates the 'big bang migration' anxiety that causes teams to defer necessary updates for months.

Operational readiness checklist

An integration that works in development but has no operational scaffolding will fail in production within the first month. Before go-live, build a monitoring dashboard that shows four things at a glance: sync lag (time between event occurrence in Wialon and record arrival in ERP), error rate by category (authentication, validation, transformation, network), record throughput (events per minute, trending over 24 hours), and reconciliation gap count (updated nightly).

Define alerting rules with meaningful thresholds. Sync lag above 15 minutes triggers a warning; above 60 minutes triggers a critical page. Error rate above 1% of hourly volume triggers investigation. Reconciliation gap above 0.1% triggers a pre-shift review. Avoid alert fatigue by tuning thresholds during the first two weeks of production — start conservative and tighten as you learn the baseline.
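Expressing those thresholds as data rather than code makes the tuning during the first two weeks a config change instead of a deploy. The warning values below come from the text; the critical values for error rate and reconciliation gap are illustrative assumptions.

```python
# Alert rules from the thresholds above. Critical values for error_rate
# and recon_gap_rate are assumed for illustration; tune against baseline.
ALERT_RULES = [
    {"metric": "sync_lag_min",   "warn": 15,    "critical": 60},
    {"metric": "error_rate",     "warn": 0.01,  "critical": 0.05},
    {"metric": "recon_gap_rate", "warn": 0.001, "critical": 0.005},
]

def evaluate(metrics: dict) -> list:
    """Return (metric, severity) pairs for every breached threshold."""
    alerts = []
    for rule in ALERT_RULES:
        value = metrics.get(rule["metric"])
        if value is None:
            continue
        if value > rule["critical"]:
            alerts.append((rule["metric"], "critical"))
        elif value > rule["warn"]:
            alerts.append((rule["metric"], "warning"))
    return alerts
```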

Write runbooks for the three most common failure scenarios: Wialon session expiration (check token status, force re-authentication, verify backfill), ERP database lock (identify blocking query, wait or escalate to DBA, resume sync), and schema mismatch (identify changed field, deploy updated transformation, reprocess failed batch). Each runbook should be completable by an on-call engineer who did not build the integration. If it requires the original developer to fix, it is not a runbook — it is tribal knowledge, and it will fail when that developer is on vacation.

  • Assign on-call ownership for the integration pipeline — it should not default to the platform team without explicit agreement.
  • Run a game day exercise before go-live: simulate token expiration, network partition, and schema change to verify that alerting, retries, and runbooks work.
  • Maintain an incident log with root cause, resolution, and prevention action for every outage exceeding 30 minutes.
