What Is SAP Real-time Data Platform?
The SAP Real-time Data Platform is a flexible deployment environment for managing OLTP, analytical, and big data workloads, in-memory and on disk, on premise and in the cloud.
What value does it deliver?
- Flexible deployment options: storage, transaction type, hosting, and data source
- Support for advanced applications that can mix OLTP, analytics, and big data in real time
- Extreme acceleration for big data and fast movement of data
Traditional database management systems are designed to optimize performance on hardware with constrained main memory, where disk I/O is the main bottleneck. The focus was on optimizing disk access, for example, by minimizing the number of disk pages read into main memory during processing. The SAP HANA database is designed from the ground up around the idea that memory is available in abundance (roughly 18 billion gigabytes, or 18 exabytes, is the theoretical limit of addressable memory for 64-bit systems) and that I/O access to the hard disk is no longer the constraint. Instead of optimizing hard-disk I/O, SAP HANA optimizes memory access between the CPU cache and main memory. SAP HANA is a massively parallel (distributed) data management system that runs fully in main memory, offers row- and column-based storage options, and supports built-in multitenancy.
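The difference between row-oriented and column-oriented storage can be illustrated with a small sketch. This is not SAP HANA code, just a toy model of the two layouts: aggregating a single attribute in the columnar layout touches only that attribute's contiguous array, which is why an in-memory column store optimizes CPU-cache-to-RAM traffic rather than disk I/O.

```python
# Toy illustration (not SAP HANA internals): the same table stored
# row-wise and column-wise.

rows = [
    (1, "EUR", 100.0),
    (2, "USD", 250.0),
    (3, "EUR", 75.0),
]

# Row store: each record's fields are contiguous.
row_store = rows

# Column store: each attribute's values are contiguous.
columns = {
    "id":       [r[0] for r in rows],
    "currency": [r[1] for r in rows],
    "amount":   [r[2] for r in rows],
}

# Aggregating one attribute: the row store must walk whole records,
# while the column store reads only the "amount" array.
total_row_store = sum(r[2] for r in row_store)
total_col_store = sum(columns["amount"])
assert total_row_store == total_col_store == 425.0
```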
SAP HANA serves as a foundation for future in-memory analytic and transactional applications. The SAP HANA database can also improve the performance of existing SAP applications: for example, SAP applications that use Open SQL can run on SAP HANA without changes.
New applications developed natively on and powered by SAP HANA can improve the performance of business processes and analytical scenarios. Application development techniques optimized for parallel in-memory processing can take advantage of new enterprise data management and application development logic to fully exploit advances in hardware technologies.
SAP HANA Database Administration
The administration console of the SAP HANA Studio provides an all-in-one support environment for system monitoring, backup and recovery, and user provisioning.
To protect against data center failures (fire, power outage, earthquake, and so on) and hardware failures, such as the loss of a node, SAP HANA supports a hot-standby concept: synchronous mirroring to a redundant data center with redundant SAP HANA databases. This is in addition to a cold-standby concept, in which a standby system within one SAP HANA landscape takes over through an automatically triggered failover.
SAP HANA is an ACID-compliant database, guaranteeing the atomicity, consistency, isolation, and durability of transactions. In addition to recovery for online analytical processing (OLAP), SAP HANA provides transactional recovery for online transaction processing (OLTP) through the administration console in the SAP HANA studio.
Currently, supported processes are:
- Recovery to last data backup
- Recovery to an older (previous) data backup
- Recovery to last state before crash
- Point-in-time recovery
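The recovery options above can be pictured with a conceptual sketch: restore the last data backup, then replay committed log entries up to a chosen point. This is not the SAP HANA recovery implementation, only the general backup-plus-log-replay idea behind it; all names and values here are hypothetical.

```python
# Conceptual sketch (not SAP HANA internals): recovery restores a
# data backup and replays redo-log entries up to a target timestamp.

backup = {"balance": 100}          # state captured by the data backup at t=0
log = [                            # committed log entries written after the backup
    (1, ("balance", 150)),
    (2, ("balance", 180)),
    (3, ("balance", 120)),
]

def recover(backup_state, redo_log, until):
    """Replay log entries with timestamp <= until onto the backup."""
    state = dict(backup_state)
    for ts, (key, value) in redo_log:
        if ts > until:
            break
        state[key] = value
    return state

assert recover(backup, log, until=0) == {"balance": 100}   # last data backup only
assert recover(backup, log, until=2) == {"balance": 180}   # point-in-time recovery
assert recover(backup, log, until=3) == {"balance": 120}   # last state before crash
```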
User provisioning is supported with authentication, role-based security, and analysis authorization using analytic privileges, which enable security for analytical objects based on a set of attribute values.
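The effect of an analytic privilege can be sketched as a row filter over attribute values. The sketch below is a hypothetical illustration of the concept, not SAP HANA's authorization machinery; the attribute names and granted values are invented.

```python
# Hypothetical sketch: an analytic privilege restricts a user to rows
# whose attribute values fall inside a granted set.

sales = [
    {"region": "EMEA", "amount": 100},
    {"region": "APJ",  "amount": 200},
    {"region": "EMEA", "amount": 50},
]

# Privilege granted to this user: region must be EMEA.
privilege = {"region": {"EMEA"}}

def apply_privilege(table_rows, priv):
    """Return only the rows the privilege allows the user to see."""
    return [r for r in table_rows
            if all(r[attr] in allowed for attr, allowed in priv.items())]

visible = apply_privilege(sales, privilege)
assert visible == [{"region": "EMEA", "amount": 100},
                   {"region": "EMEA", "amount": 50}]
```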
The administration console in the SAP HANA studio provides a version control mechanism for SAP HANA and SAP BusinessObjects Data Services models. Export and import of XML files is supported and can be used to back up and restore versions of models.

SAP HANA can run in a single production landscape, especially if the initial use-case scenario is not business critical and the data-load performance for the initial load is acceptable for reloading the data. However, we recommend aligning the SAP Landscape Transformation and SAP BusinessObjects Data Services environments with the existing source system landscapes. In an enterprise-grade business support mode, SAP HANA needs to run in standard SAP development, quality assurance and staging, and production environments.
The table summarizes the benefits offered by specific features of the SAP HANA database.

| SAP HANA Database Feature | Benefit |
| --- | --- |
| Large memory footprint, greater computation power, faster than disk | Analysis of large data sets; complex computations |
| Row and column store, with faster aggregation in the column store | No aggregate tables; no data duplication (compression highly dependent on actual data used) |
| Insert only on delta | Fast data loads |
The SAP HANA database manages data in a multicore architecture, distributing data across all cores to maximize RAM locality, and supports both scale-out (horizontal) and scale-up (vertical) growth.
In the scale-out scenario, the SAP HANA database scales beyond a single server by allowing multiple servers in one cluster. Large tables can be distributed across multiple servers using round-robin, hash, or range partitioning, either alone or in combination. SAP HANA has the functionality to execute queries and maintain distributed transaction safety across multiple servers.
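The three partitioning schemes mentioned above can be sketched in a few lines. This is an illustration of the idea only, not SAP HANA's partitioning implementation; the toy hash function and range bounds are assumptions for the example.

```python
# Illustrative sketch: distributing rows over 3 servers using
# round-robin, hash, and range partitioning.

SERVERS = 3
keys = [15, 42, 7, 88, 23, 61]

def round_robin(ks, n=SERVERS):
    """Assign rows to servers in rotation, ignoring the key value."""
    return [i % n for i, _ in enumerate(ks)]

def hash_partition(ks, n=SERVERS):
    """Assign by a hash of the key (toy hash: modulo)."""
    return [k % n for k in ks]

def range_partition(ks, bounds=(30, 60)):
    """Partition 0: k < 30, partition 1: 30 <= k < 60, partition 2: k >= 60."""
    return [sum(k >= b for b in bounds) for k in ks]

assert round_robin(keys)    == [0, 1, 2, 0, 1, 2]
assert hash_partition(keys) == [0, 0, 1, 1, 2, 1]
assert range_partition(keys) == [0, 1, 0, 2, 0, 2]
```

Round-robin spreads load evenly without looking at the data; hash partitioning lets a query route directly to the server holding a given key; range partitioning keeps adjacent key ranges together, which helps range scans.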
Specific server configurations for SAP HANA deployments are the responsibility of certified SAP technology partners. These partners can balance better performance per CPU at lower cost, enabling customers to take advantage of larger memory address spaces, lower data-center operating costs, and simpler management.
One of the major sources of contention, and a reason for slow performance in traditional DBMSs, is locking data while updates are performed. SAP HANA avoids this issue and enables a high degree of parallelization by using insert-only data records: instead of updating existing records in place, changes are inserted as net-new delta entries in the column store.
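The insert-only idea can be sketched as appending a new version of a record rather than locking and overwriting the old one, so readers never block on writers. The sketch below is a conceptual model with invented names, not SAP HANA's delta-store internals.

```python
# Conceptual sketch (not SAP HANA internals): updates append net-new
# versioned entries instead of overwriting rows in place.

table = []   # each entry: (key, value, version)

def insert(key, value):
    table.append((key, value, 1))

def update(key, value):
    # Append a net-new delta entry with the next version number;
    # no lock is taken on the existing entry.
    latest = max(v for k, _, v in table if k == key)
    table.append((key, value, latest + 1))

def read(key):
    # Readers see the highest version without blocking writers.
    return max((v, val) for k, val, v in table if k == key)[1]

insert("order-1", "open")
update("order-1", "shipped")
assert read("order-1") == "shipped"
assert len(table) == 2          # the old version is retained, not overwritten
```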
Using columnar data stores, SAP HANA can achieve compression rates unheard of in traditional databases. In one example, analysis of SAP customers' systems showed that only 10% of the attributes in a single financial database table were used in an SQL statement, shrinking the data volume to be accessed from 35 GB in traditional Relational Database Management System (RDBMS) storage to 800 MB in a column-store design, just over 2% of the traditional volume. As this example shows, much higher compression rates are achieved with high-sparsity data than with dense data.
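One reason columnar storage compresses so well, especially for sparse (low-cardinality) attributes, is dictionary encoding: distinct values are stored once and each cell is replaced by a small integer code. The following is a minimal sketch of that technique, not a reproduction of SAP HANA's compression; the sample column is invented.

```python
# Minimal sketch of dictionary encoding, one compression technique
# that benefits low-cardinality columns in a column store.

column = ["EUR", "USD", "EUR", "EUR", "USD", "EUR"]

# Store each distinct value once, then replace values by integer codes.
dictionary = sorted(set(column))            # ['EUR', 'USD']
code = {v: i for i, v in enumerate(dictionary)}
encoded = [code[v] for v in column]

assert encoded == [0, 1, 0, 0, 1, 0]

# Decoding restores the original column losslessly.
assert [dictionary[i] for i in encoded] == column
```

The sparser the column (few distinct values over many rows), the smaller the codes and the dictionary relative to the raw data, which matches the observation that high-sparsity data compresses far better than dense data.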