The inventory number everyone quotes is wrong
Ask a mid-market retailer how much of a given SKU they have in stock and you will get a number. That number is almost certainly wrong, not because anyone is lying, but because the real answer lives in three different systems that do not agree with each other.
The primary distribution center says 340 units. The West Coast 3PL says 128. The East Coast 3PL says 95. But the DC’s number includes units in receiving that have not been put away. The West Coast 3PL’s report is from six hours ago. The East Coast 3PL counts “available” differently: their number includes units allocated to open orders that have not shipped yet.
The sum of these numbers is 563. The actual available-to-sell quantity is something else entirely. Getting to the real number requires understanding not just the quantities, but the semantics behind each partner’s data.
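The arithmetic can be made concrete. The adjustments below are hypothetical, since the article only hints at each partner's semantics: assume 40 of the DC's units sit in receiving, the West Coast 3PL has sold an estimated 25 units since its stale snapshot, and 30 of the East Coast units are allocated to open orders.

```python
# Illustrative reconciliation of the three reports above.
# All adjustment figures are assumed for the sake of the example.
dc_on_hand = 340          # includes units in receiving, not yet put away
dc_in_receiving = 40
west_reported = 128       # snapshot is six hours stale
west_picked_since = 25    # estimated units picked since that snapshot
east_reported = 95        # includes units allocated to open orders
east_allocated = 30

available = (
    (dc_on_hand - dc_in_receiving)
    + (west_reported - west_picked_since)
    + (east_reported - east_allocated)
)
print(available)  # 468, not the naive sum of 563
```

Under these assumptions, nearly a hundred "phantom" units disappear once each number is read on its own terms.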
Format normalization is the first wall
Before you can reconcile inventory across fulfillment partners, you have to read what they sent you. This is harder than it should be.
One 3PL sends a CSV with columns: SKU, LOC, QTY_OH, QTY_ALLOC, QTY_AVAIL. Another sends an XML feed with nested elements: <item sku="..."><on_hand>...</on_hand><committed>...</committed></item>. A third sends a flat file with fixed-width columns and a header row that changes every time they upgrade their WMS.
The field names are different. The structures are different. The identifiers may not even match. One partner uses your SKU, another uses their internal item code with a crosswalk table they emailed you six months ago, a third uses UPC codes but strips the leading zero.
Each of these feeds needs to be parsed into a common structure before any reconciliation can happen. That parsing logic is typically embedded in custom scripts that someone wrote when the 3PL relationship started, maintained by whoever inherits the codebase, and fragile in ways that only surface when the 3PL changes something.
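A minimal sketch of that parsing layer might look like the following. The feed contents, field names, and the passed-in location for the XML feed (which carries no location field of its own) are assumptions modeled on the formats described above, not any real partner's spec.

```python
import csv
import io
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    """Common structure every feed is normalized into."""
    sku: str
    location: str
    on_hand: int
    allocated: int

# Hypothetical feed contents standing in for the real files.
CSV_FEED = "SKU,LOC,QTY_OH,QTY_ALLOC,QTY_AVAIL\nABC-1,WEST,128,25,103\n"
XML_FEED = (
    '<inventory><item sku="ABC-1">'
    "<on_hand>95</on_hand><committed>30</committed>"
    "</item></inventory>"
)

def parse_csv_feed(text):
    # One 3PL's CSV: columns map directly onto the common schema.
    return [
        InventoryRecord(row["SKU"], row["LOC"],
                        int(row["QTY_OH"]), int(row["QTY_ALLOC"]))
        for row in csv.DictReader(io.StringIO(text))
    ]

def parse_xml_feed(text, location):
    # Another 3PL's XML: different element names, same target schema.
    root = ET.fromstring(text)
    return [
        InventoryRecord(item.get("sku"), location,
                        int(item.findtext("on_hand")),
                        int(item.findtext("committed")))
        for item in root.iter("item")
    ]

records = parse_csv_feed(CSV_FEED) + parse_xml_feed(XML_FEED, "EAST")
```

The point is not the parsers themselves but that both end in the same `InventoryRecord`; everything downstream sees one shape regardless of what the partner sent.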
Timing creates a second layer of ambiguity
Even if every partner reported in the same format with the same field definitions, the data would still not align because of timing.
3PL A sends inventory snapshots every hour. 3PL B sends them twice daily. The DC’s WMS has a real-time API, but the polling job runs every 15 minutes. Between those intervals, orders are being picked, shipments are arriving, transfers are in transit.
Combining a 9:00 AM snapshot from one partner with a 6:00 AM snapshot from another and a 9:12 AM API pull from a third produces a number that was never true at any single point in time. It is a composite that approximates reality, and the gap between the approximation and the truth determines whether you oversell, undersell, or misallocate.
Handling this requires more than just pulling data together. It requires understanding the freshness of each source, the latency of each feed, and the business rules for what to do when sources disagree. If the 3PL says 100 units available but their data is six hours stale and your sales velocity for that SKU is 20 units per hour, the real availability is closer to zero.
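That staleness adjustment can be expressed as a simple discount, sketched below. The linear sales-velocity model is a deliberate simplification; a real system would use per-SKU demand forecasts rather than a flat hourly rate.

```python
def effective_availability(reported_qty, hours_stale, velocity_per_hour):
    """Discount a stale snapshot by estimated units sold since it was taken.

    A crude linear sketch: assumes constant sales velocity and clamps at
    zero, since availability cannot be negative.
    """
    return max(0, reported_qty - hours_stale * velocity_per_hour)

# The scenario from the text: 100 units reported, six hours ago,
# at 20 units per hour -- the real availability is closer to zero.
effective_availability(100, 6, 20)  # -> 0
```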
What “available” means depends on who you ask
The most subtle problem is semantic. Every fulfillment partner defines inventory status slightly differently.
On hand might mean physical units in the warehouse, or it might mean units in the warehouse minus damaged stock, or it might include units in receiving that have not been inspected yet. The definition is embedded in the 3PL’s WMS configuration, and it may not match the definition in their documentation.
Available might mean on hand minus allocated, or on hand minus allocated minus safety stock, or on hand minus committed to open orders. Some 3PLs distinguish between soft allocations (reserved for pending orders) and hard allocations (assigned to specific shipments). Others do not.
In transit might mean units shipped from a supplier to the 3PL, or units being transferred between 3PL locations, or both. Some partners report inbound shipments as part of their inventory feed. Others do not report them at all until receiving is complete.
Reconciling these fields requires a mapping layer that understands not just “this column maps to that column” but “this column from Partner A minus this other column from Partner A equals the equivalent of that column from Partner B.”
Building a unified inventory view
The goal is a single, reconciled inventory record per SKU that reflects actual availability across all fulfillment locations. Getting there requires solving three problems in sequence: normalize the formats, align the semantics, and handle the timing.
Format normalization means accepting each partner’s data in whatever structure they provide and mapping it to a common inventory schema. Whether the feed arrives as CSV, XML, JSON, or a fixed-width flat file, the output is the same set of fields: SKU, location, on hand, allocated, available, as-of timestamp.
Semantic alignment means defining what “available” means in your system and mapping each partner’s fields to that definition. If Partner A reports QTY_OH and QTY_ALLOC separately, availability is the difference. If Partner B reports only QTY_AVAIL but their definition includes safety stock as available, a transformation subtracts the safety stock threshold.
Multi-source joins connect inventory records across partners to the same canonical SKU. This is where identifier crosswalks, UPC lookups, and supplier item code mappings come in. A join on SKU alone works when all partners use your identifiers. When they do not, the join condition becomes a mapping problem in its own right.
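The crosswalk-based join can be sketched as a lookup that falls back to the identifier itself when a partner already uses your SKU. The partner names, item codes, and quantities below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical crosswalk: (partner, their identifier) -> canonical SKU.
CROSSWALK = {
    ("west_3pl", "ITM-0042"): "ABC-1",       # partner's internal item code
    ("east_3pl", "884912345678"): "ABC-1",   # UPC with leading zero stripped
}

def unify(records):
    """Roll up (partner, identifier, available) tuples by canonical SKU."""
    totals = defaultdict(int)
    for partner, ident, avail in records:
        # Fall back to the raw identifier: the DC already uses our SKUs.
        sku = CROSSWALK.get((partner, ident), ident)
        totals[sku] += avail
    return dict(totals)

unified = unify([
    ("dc", "ABC-1", 300),
    ("west_3pl", "ITM-0042", 103),
    ("east_3pl", "884912345678", 65),
])
# All three records resolve to the same canonical SKU.
```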
datathere handles this as a multi-source integration with quality enforcement. Each 3PL and DC feed is a source. The unified inventory view is the destination schema. AI-generated mappings handle the field-level translation (QTY_OH to on_hand, LOC to location_code), and transformation expressions handle the semantic conversions. Multi-source joins connect records across partners using whatever identifier each partner provides.
Quality enforcement rules catch the problems that manual reconciliation misses. A negative available quantity gets flagged. A location code that does not exist in the master location list gets quarantined. A quantity that changed by more than 50% since the last feed triggers a review before the consolidated view updates.
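Expressed as code, those three rules might look like the check below. The rule names and the 50% threshold follow the examples in the text; the record shape is an assumption.

```python
def check_record(rec, known_locations, previous_qty=None):
    """Return the quality-rule violations for one inventory record.

    A sketch of the three rules described above; thresholds and rule
    names are illustrative, not a real rule engine's API.
    """
    issues = []
    if rec["available"] < 0:
        issues.append("negative_available")      # flagged for review
    if rec["location"] not in known_locations:
        issues.append("unknown_location")        # record gets quarantined
    if previous_qty and abs(rec["available"] - previous_qty) / previous_qty > 0.5:
        issues.append("swing_over_50pct")        # hold the consolidated update
    return issues

check_record({"available": 103, "location": "WEST"},
             known_locations={"WEST", "EAST", "DC"},
             previous_qty=100)  # -> [] : clean record, view updates
```

Records that pass flow into the consolidated view; records that fail wait in quarantine until a human or an automated rule resolves them.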
The downstream consequences of getting this right
Accurate, reconciled inventory feeds everything downstream. Available-to-promise calculations depend on it. Channel-specific stock allocation depends on it. Replenishment triggers depend on it. Every marketplace listing that shows “In Stock” or “Only 3 Left” depends on it.
When the reconciliation is wrong, the failures are visible and expensive. Overselling generates customer complaints, cancellations, and marketplace penalties. Underselling — showing out-of-stock when units exist at a different location — is lost revenue that never appears on a report.
The retailers that solve this problem do not solve it by making their 3PLs standardize. They solve it by building (or adopting) a layer that accepts the variation, normalizes it, and produces a single reliable view. The complexity does not disappear. It gets managed in one place instead of leaking into every system that consumes inventory data.