The connector question
Integration platforms sell on connector counts. “500+ pre-built connectors.” “800 integrations out of the box.” “Universal connectivity.” The implicit promise is that a large enough catalog covers the integration work a company needs.
The promise holds for one class of problem and falls apart for another. Knowing which is which saves months of platform-selection work.
What a connector actually is
A connector is code that handles the authentication, API specifics, and data-model quirks of one particular system. A Salesforce connector knows how to authenticate to Salesforce, how to query its objects, how to handle its pagination, and how to translate between Salesforce field types and a generic representation.
Writing a connector for a popular SaaS app takes weeks. Maintaining it takes ongoing effort: the target API changes, new object types get added, rate limits shift, authentication mechanisms evolve. A vendor that maintains 500 connectors is running a significant engineering operation just to keep them working.
For the user of the integration platform, a working connector is invisible infrastructure. For the vendor, it is a cost that scales linearly with the catalog.
When universal connectors work
Pre-built connector catalogs cover the problem well in specific conditions.
The source is a documented SaaS application. If the source is Salesforce, HubSpot, Stripe, Zendesk, or a similar major SaaS tool, the connector has been written, tested, and used by thousands of customers. The connector is reliable, current, and well-understood.
The data model is stable. If the SaaS vendor maintains backward compatibility on their API, the connector keeps working without intervention. Most major SaaS vendors do this reasonably well, though schema changes still happen.
The integration fits standard patterns. If the work is pulling records out of one SaaS app and pushing them into another, a connector-based platform handles the heavy lifting. The user configures a workflow; the connectors handle the plumbing.
The category that fits these conditions is SaaS-to-SaaS integration, and iPaaS platforms are purpose-built for it.
When universal connectors stop working
Three conditions reliably expose the limits of a connector catalog.
The source has no connector. A partner sends data as a CSV from an ERP the platform has never heard of. A supplier exports from a custom internal system that is not a SaaS product. A legacy tool from 1998 is still running and has no API. No catalog covers these cases, and the list of “sources the platform does not support” grows with every new partner.
The source has a connector, but the specific flow does not fit. The connector can extract records from the source, but the user needs a filtered or reshaped view that the connector does not expose. The user ends up writing custom code anyway, using the connector only for authentication.
The schema changes faster than the connector. A partner updates their API structure. The vendor’s connector catches up in six weeks. For those six weeks, the integration either breaks or silently ingests bad data.
These are not edge cases in most real integration work. They are the majority of it.
The AI-driven alternative
AI-driven mapping takes a different architectural approach. Instead of maintaining a catalog of pre-written connectors, the platform reads the source directly. It parses the format (CSV, JSON, XML, PDF, or a response from an arbitrary API), infers the schema from the data, and proposes how each field maps to the destination.
The source does not need a connector. The schema does not need to be documented in advance. The mapping adapts when the source changes.
This approach trades one cost for another. A connector catalog is a fixed library: the vendor pays to build it, the user gets a reliable interface for covered systems. AI-driven mapping has no fixed library: the user gets flexibility for any source, but the mapping output needs human review before production.
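The inference step can be illustrated with a toy version: sample the incoming rows, guess a type per column, and propose a match against the destination schema by name similarity. This is a deliberately naive sketch of the idea — not datathere's algorithm — and the unmatched column shows why human review sits in the loop:

```python
import csv, difflib, io

def infer_schema(csv_text: str, sample: int = 100) -> dict[str, str]:
    """Guess a type for each column from a sample of rows."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))[:sample]
    schema = {}
    for col in rows[0].keys():
        values = [r[col] for r in rows if r[col]]
        if all(v.lstrip("-").isdigit() for v in values):
            schema[col] = "integer"
        elif all(_is_float(v) for v in values):
            schema[col] = "decimal"
        else:
            schema[col] = "string"
    return schema

def _is_float(v: str) -> bool:
    try:
        float(v)
        return True
    except ValueError:
        return False

def propose_mapping(source_cols, dest_fields) -> dict:
    """Match each source column to the closest destination field name."""
    mapping = {}
    for col in source_cols:
        close = difflib.get_close_matches(col, dest_fields, n=1, cutoff=0.6)
        mapping[col] = close[0] if close else None  # None -> needs human review
    return mapping

sample = "cust_id,order_total,ship_city\n42,19.99,Lyon\n43,5.00,Oslo\n"
schema = infer_schema(sample)
proposal = propose_mapping(schema, ["customer_id", "total_amount", "city"])
# "order_total" has no close name match, so it is flagged for review
```

A production system replaces the name-similarity heuristic with a model that also looks at values and context, but the shape of the workflow — infer, propose, review — is the same.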
Side by side
| Dimension | Universal connector catalog | AI-driven mapping |
|---|---|---|
| Source coverage | Limited to the catalog | Any file or API |
| Setup per source | None if covered, custom if not | Upload, review, certify |
| Reliability for covered sources | High | Comparable after certification |
| Reliability for uncovered sources | Not applicable | Same as covered sources |
| Schema drift response | Vendor updates the connector | Drift detected, mapping update proposed |
| New source addition | Wait for vendor or build custom | Standard workflow |
| Best fit | SaaS-to-SaaS integration | Partner data, files, unfamiliar APIs |
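The schema-drift row is the most mechanical of these differences, and the detection half can be sketched in a few lines: compare the schema a feed was certified against with the schema inferred from the latest delivery. The function and field names here are hypothetical, not any platform's implementation:

```python
def detect_drift(certified: dict[str, str], observed: dict[str, str]) -> list[str]:
    """Report differences between the certified schema for a source
    and the schema inferred from the most recent delivery."""
    issues = []
    for col, typ in certified.items():
        if col not in observed:
            issues.append(f"missing column: {col}")
        elif observed[col] != typ:
            issues.append(f"type change: {col} {typ} -> {observed[col]}")
    for col in observed:
        if col not in certified:
            issues.append(f"new column: {col}")
    return issues

certified = {"sku": "string", "qty": "integer"}
observed = {"sku": "string", "qty": "decimal", "warehouse": "string"}
# detect_drift flags the retyped "qty" and the new "warehouse" column
# instead of letting the pipeline silently ingest bad data
```

The proposal half — suggesting an updated mapping for the flagged columns — is where the AI comes back in.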
The two are complementary, not competitive
Treating universal connectors and AI-driven mapping as competing choices misses the point. They solve different problems.
For SaaS-to-SaaS workflow automation, a connector catalog is the right tool. The source and destination are known, the schemas are documented, and the vendor handles the maintenance.
For partner data integration, files, PDFs, and undocumented APIs, AI-driven mapping fits. The source is unknown by default, the schema is learned from the data, and the platform adapts to formats that no catalog would cover.
Most organizations need both. An iPaaS for SaaS-to-SaaS. An AI-driven platform for partner data. Each is cheaper at its own job than forcing one to do the other.
Where datathere fits
datathere sits on the AI-driven mapping side of this divide. The platform reads any source, infers the schema, drafts the mapping and quality rules, and runs certified configurations on deterministic code. Adding a new source is a configuration task, not an engineering project.
For the SaaS-to-SaaS work, use an iPaaS. For the long tail of partner data, file ingestion, and undocumented APIs, datathere handles the mapping without a pre-built connector.
FAQ
How many connectors does a good integration platform need?
For SaaS-to-SaaS work, more is better within reason; past around 500 covered systems, diminishing returns kick in. For partner data work, connector count is the wrong metric. The right metric is whether the platform can handle sources without a pre-built connector at all.
Do custom connectors fill the gap?
They can, for a limited number of sources. The cost is ongoing: a custom connector is code that someone has to maintain as the target system changes. For more than a handful of custom connectors, the maintenance burden outweighs the integration value.
Is AI-driven mapping slower than a pre-built connector?
It depends on the source. For a covered SaaS app, an iPaaS connector is hard to beat on time-to-first-integration. For a new partner with a CSV nobody has seen, AI-driven mapping is faster than either writing a custom connector or manually drawing mappings on a canvas.
What about partner data that also requires authentication and rate limiting?
Partner APIs that need OAuth, rate limiting, and retry logic are a legitimate gap in pure AI-mapping platforms. The better AI-mapping platforms include those capabilities alongside the schema handling, so the connector work gets consolidated with the mapping work rather than split across tools.
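The retry-and-rate-limit logic mentioned here is small but easy to get wrong. A minimal sketch, under the assumption that the partner signals throttling with HTTP 429 (represented below by a hypothetical `RateLimitedError`), is exponential backoff with jitter:

```python
import random
import time

class RateLimitedError(Exception):
    """Stand-in for a 429 Too Many Requests response from a partner API."""

def call_with_retry(request, max_attempts: int = 5, base_delay: float = 0.5):
    """Retry a zero-argument API call with exponential backoff and jitter.

    Backoff doubles each attempt; jitter spreads out retries so that
    many clients throttled at once do not all retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimitedError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In practice a platform would also honor a `Retry-After` header when the partner sends one, rather than relying on backoff alone.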
Can a connector catalog be AI-augmented?
Some vendors layer AI on top of their catalogs to help users compose workflows in natural language. This helps with authoring but does not extend the catalog. The AI still routes through pre-built connectors. For sources outside the catalog, the AI is stuck at the same limit.