Hello All,
I am currently part of a POC related to HANA-Sybase Integration.
We need a huge amount of data to be loaded into some HANA tables (the 5 tables shown below).
The current record counts in the tables are:
DIVIDEND_EVENT 387
SPLIT_EVENT 11005
STOCK_HISTORY 2840000
STOCK_QUOTE 11548892
STOCK_TRADE 5775442
The data needs to be increased proportionally across the 5 tables, up to ~2 billion rows in the STOCK_TRADE table.
All the tables share a common column, INSTRUMENT_ID (commodity market scenario).
Another constraint is that the data must stay consistent across the tables, i.e. referential integrity on INSTRUMENT_ID must be maintained.
The current records all relate to the year 2005. When I increase the data set, I need to distribute it uniformly across 10 years (2005-2015).
I would like your help in deciding on a best/recommended approach for loading such massive yet meaningful dummy data into HANA tables.
Has anyone faced a similar situation?
PS: The records shown above were loaded into HANA through various CSV files by some colleagues of mine a couple of years ago.
These CSV files are readily available to me.
Some thoughts from our side on possible solutions: Excel formulas/macros to import/export CSV files; a SQL procedure; scripting on the HANA Linux host.
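As one illustration of the scripting idea: a minimal Python sketch that replicates existing CSV rows while keeping INSTRUMENT_ID consistent across tables and spreading the copies over the target years. The column names (INSTRUMENT_ID, TRADE_DATE) and the ID-offset scheme are my assumptions, not the real schema; for the actual target, roughly 2,000,000,000 / 5,775,442 ≈ 346 copies of STOCK_TRADE would be needed.

```python
from datetime import date

# Assumed target years for the uniform distribution (adjust as needed).
YEARS = list(range(2005, 2015))

def scale_rows(rows, copies, id_offset_step=1_000_000):
    """Replicate `rows` `copies` times.

    Copy k shifts every INSTRUMENT_ID by k * id_offset_step; applying the
    SAME rule to all 5 tables preserves referential integrity.  Copy k's
    dates are moved into YEARS[k % len(YEARS)], so the copies spread
    uniformly over the target years.
    """
    out = []
    for k in range(copies):
        target_year = YEARS[k % len(YEARS)]
        for row in rows:
            new = dict(row)
            new["INSTRUMENT_ID"] = int(row["INSTRUMENT_ID"]) + k * id_offset_step
            d = date.fromisoformat(row["TRADE_DATE"])
            # Clamp Feb 29 when the target year is not a leap year.
            day = min(d.day, 28) if d.month == 2 else d.day
            new["TRADE_DATE"] = d.replace(year=target_year, day=day).isoformat()
            out.append(new)
    return out

# Tiny demo: two "tables" sharing INSTRUMENT_ID, scaled with the same rule.
trades = [{"INSTRUMENT_ID": "7", "TRADE_DATE": "2005-03-14", "PRICE": "42.5"}]
events = [{"INSTRUMENT_ID": "7", "TRADE_DATE": "2005-06-01", "RATIO": "2:1"}]

big_trades = scale_rows(trades, copies=3)
big_events = scale_rows(events, copies=3)

# Every INSTRUMENT_ID in the scaled trades still exists in the scaled events.
assert {r["INSTRUMENT_ID"] for r in big_trades} == {r["INSTRUMENT_ID"] for r in big_events}
```

In practice the same function could stream each source CSV through `csv.DictReader`/`csv.DictWriter` and the resulting files could be bulk-loaded into HANA, so the scaling never needs to hold 2 billion rows in memory.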
BR
Prabhith