Benchmarking on Tables with More Than 2 Billion Entries
Benchmarking does not work on tables with more than 2 billion entries. To overcome this limitation, you can carry out one of several options beforehand.
If you run a migration benchmark on a system with tables that have more than 2 billion entries, an error similar to the following can occur:
A1EEIMP 007 Task failed with error: (IMP) DbSlExeModify failed in insert for table '<tablename>' with 99
A1EEIMP 007 (IMP) SQL error = 2055
A1EEIMP 007 (IMP) maximum number of rows per table or partition reached
To avoid the error and to overcome this limitation, choose one of the following options:
Partition Manually
Set a breakpoint for phase HDB_LANDSCAPE_REORG2 and manually partition the relevant tables with the following statements:
- Log on to the source database with the technical SQL user of the SAP system.
- Determine the row count (<row_count_source>):
  SELECT COUNT(*) FROM "<table_name>"
- Connect to the SAP HANA target database for the following statements, using the SAP HANA Studio or the SAP HANA Cockpit.
- Calculate the number of partitions:
  SELECT FLOOR(<row_count_source> / 1000000000) + 1 FROM DUMMY;
  Replace <row_count_source> with the actual number of rows in the source database. We recommend creating one partition per 1 billion rows, for example, 6 partitions for 5.2 billion rows.
- Determine the partition columns: choose all primary key fields as partition fields. Determine the primary key columns with:
  SELECT COLUMN_NAME FROM INDEX_COLUMNS WHERE TABLE_NAME = '<table_name>' AND CONSTRAINT = 'PRIMARY KEY'
- Alter the table with a hash partition:
  ALTER TABLE "<table_name>" PARTITION BY HASH(<primary_key_column_list>) PARTITIONS <num_partitions>
  Example for a table TAB1 that has the primary key fields COL1, COL2, and COL3, and for which 6 partitions are created:
  ALTER TABLE "TAB1" PARTITION BY HASH("COL1", "COL2", "COL3") PARTITIONS 6
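The partitioning steps above can be sketched as follows. This is an illustrative Python sketch, not part of SUM; the function name is hypothetical, and it only assembles the ALTER TABLE statement from a row count and the primary key columns you determined with the SQL statements above.

```python
def hash_partition_sql(table: str, pk_columns: list[str], row_count_source: int) -> str:
    """Build the ALTER TABLE ... PARTITION BY HASH statement for one table.

    Mirrors the steps above: one partition per started billion rows,
    i.e. FLOOR(<row_count_source> / 1000000000) + 1, hashed over all
    primary key columns.
    """
    num_partitions = row_count_source // 1_000_000_000 + 1
    column_list = ", ".join(f'"{col}"' for col in pk_columns)
    return f'ALTER TABLE "{table}" PARTITION BY HASH({column_list}) PARTITIONS {num_partitions}'
```

For the table TAB1 from the example above, with 5.2 billion rows and the primary key fields COL1, COL2, and COL3, this yields the example statement with 6 partitions.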
Use report SMIGR_CREATE_DDL
The report SMIGR_CREATE_DDL exists on systems based on SAP_BASIS 731 or higher with a sufficient support package level.
Run the report to create the following required files:
- SQLFiles.LST (a text file containing a list of all generated SQL files)
- <package_name>.SQL (files containing the SQL statements for the respective package)
Copy these files into the download directory before you start the benchmarking run.
Use files from a previous SUM run
After a complete SUM procedure with DMO on the same system, the relevant files SQLFiles.LST and <package_name>.SQL (see also the previous option) are located in the var directory. Copy them into the download directory of SUM. Afterwards, you can start the benchmarking run.
