Connecting to SAP systems
There are several ways to bring SAP data into the DQC Platform. The right choice depends on your landscape (ECC, S/4HANA, BW/4HANA, Datasphere), the data volumes you need to validate, and whether you want to stay inside the SAP application layer or read directly from the database.
At a glance
| Option | Best for | Layer |
|---|---|---|
| SAP HANA | SAP BW / BW/4HANA, SAP Datasphere, custom HANA-backed data marts; analytical workloads, large volumes | Database |
| SAP OData | Transactional data exposed via SAP Gateway / S/4HANA; honours PFCG roles | Application |
| Third-party connectors | Scenarios where neither HANA nor OData is sufficient, e.g. ABAP CDS views, BAPIs, delta extracts | Mixed (extracted to SQL) |
SAP HANA — direct database access
DQC connects directly to a SAP HANA database with a read-only technical user. Conceptually this is the same approach Power BI takes when it talks to SAP: the application layer is bypassed and data is read straight from the HANA database engine.
Ideal for:
SAP BW and BW/4HANA environments
SAP Datasphere environments
Custom HANA-backed data marts and reporting layers
Large analytical workloads where high throughput matters
Trade-off: you bypass any SAP application-layer logic (validations, virtual fields, ABAP code) — make sure the data model in HANA already represents what you want to validate.
→ Detailed setup: Connection to SAP HANA
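A direct HANA read can be sketched as follows using SAP's official Python driver, `hdbcli` (`pip install hdbcli`). The host, the technical user `DQC_READONLY` and the view `"REPORTING"."SALES_FACT"` are placeholders for illustration, not DQC defaults:

```python
# Sketch: reading from SAP HANA with a read-only technical user.
# All names below are illustrative placeholders.

def hana_connection_args(host, user, password, port=443):
    """Build keyword arguments for hdbcli's dbapi.connect()."""
    return {
        "address": host,
        "port": port,                    # 443 for HANA Cloud; on-prem ports differ
        "user": user,
        "password": password,
        "encrypt": True,                 # TLS; mandatory for HANA Cloud
        "sslValidateCertificate": True,
    }

def count_rows(host, user, password):
    """Read-only probe against a hypothetical reporting view."""
    from hdbcli import dbapi             # SAP's official Python driver
    conn = dbapi.connect(**hana_connection_args(host, user, password))
    try:
        cur = conn.cursor()
        cur.execute('SELECT COUNT(*) FROM "REPORTING"."SALES_FACT"')
        return cur.fetchone()[0]
    finally:
        conn.close()

# Example call (requires a reachable HANA instance):
# count_rows("hana.example.com", "DQC_READONLY", "********")
```

Because the application layer is bypassed, the query sees exactly what is in the HANA catalog; any ABAP-side logic has no effect here.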
SAP OData — stay in the application layer
OData services published by SAP NetWeaver Gateway, S/4HANA or BW expose tables and entities over HTTP. Authorisation is controlled by SAP itself via PFCG roles, so DQC's access stays inside the established SAP authorisation model.
Ideal for:
Transactional / master data that is already exposed as an OData service
Smaller datasets where SAP business logic must be honoured
Organisations that prefer to grant access via SAP roles rather than via a dedicated DB user
Trade-off: OData has higher per-request overhead than direct HANA access, so it is less suited to very large extracts. Prefer OData v4 over v2 for better schema metadata.
→ Detailed setup: Connection to OData
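A minimal OData read can be sketched with only the standard library. The service URL and entity set below are hypothetical; authentication (basic auth, OAuth) depends on how the Gateway is configured:

```python
# Sketch: querying an entity set on an SAP Gateway OData service.
# Service path and entity names are illustrative placeholders.
from urllib.parse import urlencode

def odata_query_url(base, entity_set, select, top=1000):
    """Compose an OData query URL; $select and $top keep payloads small."""
    params = {
        "$select": ",".join(select),
        "$top": str(top),
        "$format": "json",
    }
    return f"{base.rstrip('/')}/{entity_set}?{urlencode(params)}"

def fetch_rows(url, auth_header):
    """Fetch one page; an OData v4 JSON payload keeps rows under 'value'."""
    import json, urllib.request
    req = urllib.request.Request(url, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

# Example (requires a reachable Gateway):
# url = odata_query_url("https://gw.example.com/sap/opu/odata4/sap/zdqc_srv",
#                       "Customers", ["CustomerID", "Country"], top=100)
# rows = fetch_rows(url, "Basic ...")
```

Note that each page is a separate HTTP round trip, which is the per-request overhead mentioned above; for large extracts this adds up quickly compared to a single HANA query.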
Third-party connectors (e.g. Theobald)
When neither direct HANA access nor OData is sufficient — for example because you need ABAP CDS views, BAPIs, or delta extraction logic — third-party tools can bridge the gap between SAP and DQC.
DQC has one hard requirement here: the third-party connector must expose its data as a SQL interface. DQC then reads that SQL endpoint via one of its standard connectors (PostgreSQL, Microsoft SQL Server, Snowflake, etc.). From DQC's perspective it is a normal SQL data source.
Suitable tools include, for example:
Theobald Software (theobald-software.com) — extracts SAP data and either provides a SQL/JDBC endpoint or pushes it into a target database
Other ETL / replication tools that materialise SAP data into a SQL-accessible target
Typical pattern: configure the third-party tool to write SAP data into a staging schema (PostgreSQL, MS SQL, Snowflake, …) and connect DQC to that staging schema via the matching standard connector.
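Once the staging schema exists, DQC treats it like any other SQL source. As an illustration, a freshness check against a staged copy of SAP table KNA1 in PostgreSQL could look like this; the schema, table and `load_timestamp` column are assumptions about how the extractor writes its output:

```python
# Sketch: validating a staged SAP table as a plain SQL source.
# staging.kna1 and load_timestamp are illustrative, written by the extractor.

FRESHNESS_SQL = """
SELECT COUNT(*)            AS row_count,
       MAX(load_timestamp) AS last_load
FROM staging.kna1
"""

def check_staging(dsn):
    """Row count and last load time of the staged table."""
    import psycopg2                      # standard PostgreSQL driver
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(FRESHNESS_SQL)
            return cur.fetchone()

# Example (requires the staging database):
# check_staging("dbname=staging user=dqc_readonly host=db.example.com")
```

The same pattern applies unchanged to Microsoft SQL Server or Snowflake targets; only the driver and DSN differ.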
Best practices for SAP connections
Use a dedicated, read-only technical user — never a personal account
Whitelist the DQC platform's outbound IP if your network policies require it
Limit access to the schemas, entities or tables that are actually needed for data quality checks
For HANA: configure a workload class with memory, thread and timeout limits to prevent runaway queries
For OData: prefer v4 over v2 for better schema metadata and performance
Document which approach is used per data domain (e.g. master data via OData, analytics via HANA) so the rationale stays visible to future operators
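The workload-class recommendation for HANA can be sketched as two DDL statements run once by an administrator. The class and user names are placeholders, the limits are illustrative, and the property names follow the SAP HANA SQL reference (verify them against your HANA version):

```python
# Sketch: capping the DQC technical user via a HANA workload class.
# Names and limits are illustrative placeholders.

WORKLOAD_DDL = [
    # Per-statement caps: memory in GB, thread count, runtime in seconds
    """CREATE WORKLOAD CLASS "DQC_WLC"
         SET 'STATEMENT MEMORY LIMIT' = '16',
             'STATEMENT THREAD LIMIT' = '8',
             'STATEMENT TIMEOUT'      = '600'""",
    # Route every statement of the technical user into that class
    """CREATE WORKLOAD MAPPING "DQC_WLC_MAP"
         WORKLOAD CLASS "DQC_WLC"
         SET 'USER NAME' = 'DQC_READONLY'""",
]

def apply_workload_class(host, admin_user, password, port=443):
    """Execute the DDL with an administrative account."""
    from hdbcli import dbapi
    conn = dbapi.connect(address=host, port=port,
                         user=admin_user, password=password, encrypt=True)
    try:
        cur = conn.cursor()
        for ddl in WORKLOAD_DDL:
            cur.execute(ddl)
    finally:
        conn.close()

# apply_workload_class("hana.example.com", "WLC_ADMIN", "********")
```

With the mapping in place, a runaway DQC query is cancelled by HANA itself instead of degrading the whole instance.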