Channel: SCN : Discussion List - SAP HANA and In-Memory Computing
Viewing all 5653 articles

Unable to Start/Stop HANA Tenant Database


Hi,

 

We are using HANA SPS 09, Revision 96, in our landscape. We have opened the SAP HANA cockpit but are unable to see the Manage Databases app.

 

We need to start/stop a tenant database but are unable to do so, as we get no such option in HANA studio. We need your help.


Audit Logs for deactivated database user SYSTEM



Hi experts,

As per recommendation, we have deactivated the database user SYSTEM in our HANA database. Also, we have created and enabled an audit policy to capture all actions for SYSTEM as we have set it up as an emergency user (to be activated in times of need).

 

What keeps me wondering, though, is that I still see entries in the AUDIT_LOG view for user SYSTEM, mostly SELECT statements. Why am I still seeing these entries when SYSTEM has already been deactivated?
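In case it helps the investigation, a first step (a sketch, not part of the original question; the AUDIT_LOG columns used here follow the SAP HANA documentation and should be verified on your revision) is to check which audit policy, client host, and application the SYSTEM entries actually come from:

```sql
-- Inspect the most recent audit entries for SYSTEM to see which policy,
-- client host, and application produced them
SELECT TIMESTAMP,
       AUDIT_POLICY_NAME,
       EVENT_ACTION,
       EVENT_STATUS,
       CLIENT_HOST,
       APPLICATION_NAME
  FROM "PUBLIC"."AUDIT_LOG"
 WHERE USER_NAME = 'SYSTEM'
 ORDER BY TIMESTAMP DESC
 LIMIT 50;
```

If the entries come from an internal host or service, they may reflect internal activity logged under SYSTEM rather than an interactive logon, which a deactivated user cannot perform.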

 

Please advise. Thank you.

 

Regards,

ANG

Copy system schema content to another schema


Hi Team,

 

We have a requirement to export the full content of the SYSTEM schema (including tables, indexes, functions, etc.) and import it into another schema, say SYSTEM1, so that if we change any table inside the SYSTEM1 schema, it does not affect the SYSTEM schema.

What would be the shortest and simplest method for doing this?
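One candidate, sketched here under the assumption that the RENAME SCHEMA clause of IMPORT behaves as described in the SAP HANA SQL reference for your revision (and noting that '/tmp/schema_copy' is a hypothetical server-side path), is a binary export followed by an import under the new schema name:

```sql
-- Export all objects of the source schema to the server file system
EXPORT "SYSTEM"."*" AS BINARY INTO '/tmp/schema_copy' WITH REPLACE THREADS 4;

-- Import them back under a different schema name
IMPORT "SYSTEM"."*" AS BINARY FROM '/tmp/schema_copy'
  WITH REPLACE THREADS 4 RENAME SCHEMA "SYSTEM" TO "SYSTEM1";
```

Copying the SYSTEM schema itself is unusual, so object ownership and system-owned objects may still need special handling.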

 

Your suggestions will be really appreciated.

 

Regards

Rableen

SAP Custom Development ported to HANA


Would be very interested in feedback on the following:

 

- I engage SAP custom development to write a substantial change to the Business Suite

- This gets me support on AnyDB

- I migrate Suite to HANA - is it still supported by SAP?

- Suppose it is now slow, and I use Solution Manager code inspector and this tells me why

- I implement the changes recommended and performance is now excellent - is this still supported by SAP?

 

More generally, is there a path for CD customers to move to HANA? Anyone successfully done this? Do we have to engage CD to do the remediation to retain support?

 

Thanks!

 

John

Performance issue in accessing SAP IQ data from SAP HANA


Hi Experts,

 

We have been struggling to bring data from SAP (Sybase) IQ into SAP HANA within a permissible time limit.

For example, it takes around 2 minutes 15 seconds to get 13 million records from SAP IQ into HANA, with fields like Year, Month, Date, ShipTo, Amt. and Qty. The same model takes around 5 seconds for the same records if the source is HANA instead of IQ.

Also, a raw data preview on the IQ virtual table in HANA brings back data almost instantly; however, if we drag the attributes in the Analysis tab, it takes more than 3 minutes.

 

Please suggest the best/recommended approach for optimal performance when getting IQ data into HANA.

 

 

Regards

Randhir Jha

Create Repository Role - MANAGEMENT_CONSOLE_PROC


We have SAP HANA SPS9.

I have started creating repository roles using the Web IDE. I understand that it is possible to create repository roles without first having the privileges one wishes to assign.

This worked just fine while I created a role for the DBA Cockpit. But when I tried to add the object privilege EXECUTE for the procedure MANAGEMENT_CONSOLE_PROC, I got the error message "insufficient privilege: Not authorized at ptime/query/..."

I ran an authorization check but the result was inconclusive.

 

Any suggestions that help me solve this problem will be much appreciated.

 

Cheers,

Martin

PAL Library Document Categorization


Hi everyone,

 

I have a problem related to the SAP HANA PAL library; maybe one of you can help me.

 

I am trying to use Naive Bayes document categorization in my application. For training, I have a table which has a text column and a label column. And I have another table which has only a text column (there is no label column, since that is what is to be predicted).

 

However, I couldn't find a specific function in PAL for this process. I read the related part about Naive Bayes in the PAL documentation, and it says that Naive Bayes "works quite well in areas like document classification and spam filtering". But using the NBCTRAIN and NBCPREDICT functions, I am only able to do normal classification. Document classification is something different: there should be some preprocessing, like converting the documents into a vector space, etc.

 

To summarize, is there any specific function in SAP HANA to be used for document categorization?

 

Thanks,

Inanc

SAP HANA SQL error code 1281, I want to know how to solve it


Here is my procedure:

CREATE PROCEDURE SingelRoute1(IN STARTSTOP NVARCHAR(40),
                              IN ENDSTOP   NVARCHAR(40),
                              OUT OUT_ROUTE ROUTE)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
  OUT_ROUTE = SELECT DISTINCT
         SR1.STOP AS "起始站点",                 -- start stop
         SR2.STOP AS "目的站点",                 -- destination stop
         SR1.ROUTE AS "乘坐线路",                -- route taken
         ABS(SR2.POS-SR1.POS) AS "经过的站点数"  -- number of stops passed
    FROM STOP_ROUTE SR1,
         STOP_ROUTE SR2
   WHERE SR1.ROUTE = SR2.ROUTE
     AND SR1.STOP = :STARTSTOP
     AND SR2.STOP = :ENDSTOP
  UNION ALL
  SELECT DISTINCT
         SR3.STOP AS "起始站点",
         SR4.STOP AS "目的站点",
         SR3.ROUTE AS "乘坐线路",
         ABS(SR4.POS-SR3.POS) AS "经过的站点数"
    FROM DTOP_ROUTE SR3,
         DTOP_ROUTE SR4
   WHERE 'SR3.ROUTE' = 'SR4.ROUTE'
     AND SR3.STOP = :STARTSTOP
     AND SR4.STOP = :ENDSTOP;
END

and I used the following code to call the procedure:

call "TRAFFIC"."SINGELROUTE"('安美居','任家口');

but I faced a problem with error code 1281:

Wrong number or types of parameters in call

How should I solve this problem? I need your help. Thanks very much!
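For what it's worth, two things stand out in the snippets above (a sketch, assuming the procedure shown is the one being called): the CALL uses the name SINGELROUTE while the procedure was created as SingelRoute1, and the procedure declares three parameters (two IN plus the OUT table) while the CALL passes only two arguments, which matches error 1281. From the SQL console the OUT parameter can be bound with a ? placeholder:

```sql
-- Match the declared procedure name and supply all three parameters
CALL "TRAFFIC"."SINGELROUTE1"('安美居', '任家口', ?);
```

Separately, 'SR3.ROUTE' = 'SR4.ROUTE' compares two string literals (always false), and DTOP_ROUTE may be a typo for STOP_ROUTE, but those would only surface after the call itself succeeds.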


HANA Merge and Optimize Compression process


Hi,

 

I'd love to know if anyone has any insight into how MergeDog works. The best article I can find is 2 years old: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10681c09-6276-2f10-78b8-986c68efb9c8?overridelayout=t…

 

What I understand is that when you load, you load into the delta store, which is columnar but isn't sorted or compressed, so inserts are fast. This comes at a penalty in read performance, so you periodically merge into the main store. Easy so far.

 

There is a token process, defaulting to 2 tokens per table (parameter token_per_table). You can force a merge by using:

 

MERGE DELTA OF TABLE WITH PARAMETERS ('FORCED_MERGE' = 'ON')

 

This is supposed to use all available resources to merge, at the expense of system performance. In my system, it doesn't do this - instead using just 3 processes for the pre-merge check (which presumably evaluates which partitions/tables need merging) and then just one process for the merge itself. I have big tables, so the merge takes forever.
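To see what the merge actually did, the merge history can be watched via the monitoring view M_DELTA_MERGE_STATISTICS; the following is a sketch (column names per the SAP HANA reference, to be verified on your revision, and MY_BIG_TABLE is a hypothetical table name):

```sql
-- Recent delta merges and optimize-compression runs for one table
SELECT START_TIME,
       TYPE,          -- distinguishes plain merges from optimize-compression runs
       MOTIVATION,    -- what triggered it, e.g. automatic mergedog vs. a forced merge
       SUCCESS,
       EXECUTION_TIME
  FROM "SYS"."M_DELTA_MERGE_STATISTICS"
 WHERE TABLE_NAME = 'MY_BIG_TABLE'
 ORDER BY START_TIME DESC;
```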

 

Now, some while after loading the tables, when the system is quiet, Mergedog wakes up and scans my tables again. It then goes and compresses the partitions using thread method "optimize compression". It is possible to force an optimize compression evaluation using:

 

MERGE DELTA OF TABLE WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'ON')

 

I guessed that syntax; it isn't in any reference guide I could find. But it only triggers an evaluation, and it won't run if you have high system load anyway.

 

So does anyone understand how this thing works, how to force an optimize compression, and how to get it to use more cores and finish faster? And while we're at it: what does optimize compression actually do? Does it improve query performance in most cases, and does it generally improve compression? Presumably this depends on the data in the table, and on whether the change in entropy means a different compression technique would make a difference. Why is it needed at all? When the merge process happens, HANA builds a new main store anyhow, so it could easily recompress using a different algorithm during every merge.

 

My guess is it reads the statistics of the table and defines a compression algorithm for the dictionary and attribute vector (runlength, prefix etc.) and then recompresses the table using the most appropriate compression technique.

 

This is all incredibly clever and in 99% of cases it means you never need to touch the system, it is self-tuning and requires no maintenance. But there are extreme circumstances (like the one I'm in) where I really want to control this process!

 

I'm guessing that about the only person who can answer this is Lars Breddemann, but I would be fascinated to hear from anyone who understands this process!

 

John

Please let me know how to convert this into a HANA model


SELECT coalesce(dims.IMS_ID, (select IMS_ID from DQR_OLAP.DIM_IMS where TEXT = 'N/A')) as DIM_IMS_ID,
       coalesce(dco.COUNTRY_ID, (select COUNTRY_ID from DQR_OLAP.DIM_COUNTRY where COUNTRY_CODE = '00')) as DIM_COUNTRY_ID,
       omr.ROLE_BITSET,
       dcel.ACTIVE_CEL_STATUS_ID as DIM_ACTIVE_CEL_STATUS_ID,
       coalesce(drev.REVENUE_ID, (select REVENUE_ID from DQR_OLAP.DIM_REVENUE where TEXT = 'Not Reported')) as DIM_REVENUE_ID,
       dmas.MASTER_CODE_ID as DIM_MASTER_CODE_ID,
       --SW,20111026
       coalesce(isc.ISC_ID, (select ISC_ID from dqr_olap.dim_int_sales_classification where text = 'Unavailable')) as DIM_ISC_ID,
       coalesce(rbc.RBC_ID, (select RBC_ID from dqr_olap.dim_reg_buying_classification where rbc_text = 'Unavailable')) as DIM_RBC_ID,
       coalesce(blg.BLG_ID, (select BLG_ID from dqr_olap.dim_buying_lifecycle_global where blg_text = 'Unavailable')) as DIM_BLG_ID,
       --SW,20120518
       omr.HAS_ISE,
       --NG,20120907
       omr.DIM_ORG_SEGMENT_ID,
       count(*) as THE_COUNT
  FROM DQR_OLAP.ORG_METRICS_RAW omr
       left outer JOIN DQR_OLAP.DIM_IMS dims ON omr.IMS = dims.TEXT
       left outer JOIN DQR_OLAP.DIM_COUNTRY dco ON omr.COUNTRY_CODE = dco.COUNTRY_CODE
       left outer JOIN DQR_OLAP.DIM_ACTIVE_CEL_STATUS dcel ON omr.ACTIVE_CEL_STATUS = dcel.TEXT
       left outer JOIN DQR_OLAP.DIM_MASTER_CODE dmas ON omr.SAP_MASTER_CODE = dmas.MASTER_CODE
       left outer JOIN DQR_OLAP.DIM_REVENUE drev ON omr.REVENUE_USD = drev.REVENUE_ID
       --SW,20111026
       left outer JOIN DQR_OLAP.DIM_INT_SALES_CLASSIFICATION isc on isc.TEXT = omr.ISC_TEXT
       left outer JOIN DQR_OLAP.DIM_REG_BUYING_CLASSIFICATION rbc on rbc.RBC_TEXT = omr.RBC_TEXT
       left outer JOIN DQR_OLAP.DIM_BUYING_LIFECYCLE_GLOBAL blg on blg.BLG_TEXT = omr.BLG_TEXT
 group by dims.IMS_ID,
          dco.COUNTRY_ID,
          omr.ROLE_BITSET,
          dcel.ACTIVE_CEL_STATUS_ID,
          drev.REVENUE_ID,
          dmas.MASTER_CODE_ID,
          --SW,20111026
          isc.ISC_ID,
          rbc.RBC_ID,
          blg.BLG_ID,
          --SW,20120518
          omr.HAS_ISE,
          --NG,20120907
          OMR.DIM_ORG_SEGMENT_ID;

 

I have created an analytic view and it is working fine, but I have to incorporate statements like the following from the above query:

coalesce(dims.IMS_ID, (select IMS_ID from DQR_OLAP.DIM_IMS where TEXT = 'N/A')) as DIM_IMS_ID

This statement is what is stopping me from going forward.

 

Please give me any ideas

Not able to apply filter using input Parameter


Hi Experts,

 

I am applying a filter on a column using an input parameter, but I am facing an error, as shown below.

 

Filter condition:

 

if('$$In_Opr_System_Status$$' !='*',in("SYS_STATUS",'$$In_Opr_System_Status$$'), match("SYS_STATUS",'$$In_Opr_System_Status$$')  )

 

I want to apply the filter such that, when no entry is made in the input parameter, all values are fetched for the column; otherwise only the entered value is fetched. The input parameter I am using is single-entry, and the data type of the column I am filtering on is NVARCHAR.

 

Below is the error I get when I try a data preview:

 

Error: SAP DBTech JDBC: [2048]: column store error: search table error:  [2620] executor: plan operation failed


In one of the posts it was suggested to enter the client number / make it "session client" in the view properties, but I am still facing the same error.

Please help me fix the issue.

 

Regards,

Nag

HANA User locked, how to unlock?


hello,

 

[Screenshot attached: HanaStudio.png]

 

Recently I got HANA access, and I am learning HANA modeling information views.

But in the Add System area it shows the message "Secure Storage is locked". I am unable to understand what this is and how to resolve the problem.

It is my first experience seeing it, and it does not show Add Systems either.

I attached a screenshot, which should give a better view of the problem.

 

Please help me to resolve this problem.

 

VanDana

Error while creating a procedure


Hi Experts,

 

I'm a bit new to SQLScript and am trying to explore the language syntax. I'm taking a simple example of a procedure where I select 3 fields of the LFA1 table for country = 'US'. But I get the below error while creating the procedure. Could you please guide me on where I'm going wrong?

 

create type t_lfa1 as table
(
  lifnr nvarchar(10),
  name1 nvarchar(38),
  adrnr nvarchar(10)
);

create procedure t_test1
( out t_out1 "EC2SLT2"."T_LFA1")
language sqlscript as
begin
    t_out1 = select lifnr name1 adrnr from "EC2SLT2"."LFA1"
             where land1 = 'US';
end;

 

ERROR:

 

SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near "adrnr": line 5 col 33 (at pos 123)
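For what it's worth, that parser error at "adrnr" usually just means the select list is missing its commas; a sketch of the corrected procedure body (same objects as above):

```sql
create procedure t_test1
( out t_out1 "EC2SLT2"."T_LFA1")
language sqlscript as
begin
  -- commas between the selected columns resolve the syntax error at "adrnr"
  t_out1 = select lifnr, name1, adrnr
             from "EC2SLT2"."LFA1"
            where land1 = 'US';
end;
```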

HANA pagination - Navigate forward and backward through result set


Hi folks,

 

Suppose I have a query that returns a potentially large result set, such as

 

SELECT * FROM BUT000;

 

or

 

SELECT OPBEL,
       VKONT,
       VTREF,
       HVORG,
       TVORG,
       SUM( BETRH ) AS BETRH
  FROM DFKKOP
 GROUP BY OPBEL, VKONT, VTREF, HVORG, TVORG;

 

I don't want to overwhelm the client (front end, service consumer, ABAP application using a secondary database connection, whatever) with the entire result set, but rather retrieve and display it in chunks of 100. I want to allow forward and backward navigation in the result set: show me the first 100, then the second 100, then the third 100, then back to the second 100, and so on.

 

I've found two things that come close to what I want:

 

1) SELECT ... LIMIT ... OFFSET

 

SELECT * FROM BUT000 LIMIT 100;             -- 1st 100
SELECT * FROM BUT000 LIMIT 100 OFFSET 100;  -- 2nd 100
SELECT * FROM BUT000 LIMIT 100 OFFSET 200;  -- 3rd 100
SELECT * FROM BUT000 LIMIT 100 OFFSET 100;  -- 2nd 100 again

 

Disadvantage: Each time I want to retrieve a package, the query is executed again.
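A common middle ground, not raised in the thread and sketched here assuming BUT000's key field PARTNER, is keyset pagination: order by a unique key and remember the first and last key of the current page, so each page fetch is a cheap key-range scan and both directions are possible by flipping the comparison:

```sql
-- First page
SELECT * FROM BUT000 ORDER BY PARTNER LIMIT 100;

-- Next page: rows after the last PARTNER of the current page
SELECT * FROM BUT000
 WHERE PARTNER > :last_partner
 ORDER BY PARTNER LIMIT 100;

-- Previous page: the 100 rows before the first PARTNER, re-sorted ascending
SELECT * FROM ( SELECT * FROM BUT000
                 WHERE PARTNER < :first_partner
                 ORDER BY PARTNER DESC LIMIT 100 )
 ORDER BY PARTNER;
```

The query still runs once per page, but each run touches only one key range instead of scanning past an ever-growing OFFSET.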

 

2) ADBC package handling

 

DATA:
  lr_sql     TYPE REF TO cl_sql_statement,
  lr_result  TYPE REF TO cl_sql_result_set,
  lt_results TYPE STANDARD TABLE OF but000,  " target table, missing in the original snippet
  lr_results TYPE REF TO data.

CREATE OBJECT lr_sql
  EXPORTING
    con_ref = cl_sql_connection=>get_connection( 'SECONDARY' ).

lr_result = lr_sql->execute_query( |SELECT * FROM SAPDAT.BUT000| ).
GET REFERENCE OF lt_results INTO lr_results.

lr_result->set_param_table( lr_results ).

DO 3 TIMES.
  CLEAR lt_results.
  lr_result->next_package( 100 ).
ENDDO.

lr_result->close( ).

 

Disadvantage: can only navigate forward to the next package, no backward navigation or free positioning available.

 

I'd be happy with solutions that solve the problem at the HANA/SQL Script level or at the ABAP level. Who can help?

 

Thanks,

 

Thorsten

How to consume virtual table with input parameter?


Dear all,

 

We have 2 HANA systems. The remote source has been configured in system B through smart data access, so I am able to create a virtual table in system B on a calculation view with an input parameter in system A:

 

CREATE VIRTUAL TABLE "SYSTEM_B_SCHEMA"."Z_VIRTUAL_TABLE" AT "SYSTEM_A"."NULL"."PUBLIC"."SYSTEM_A_SCHEMA::CA_VIEW";

 

I am also able to use select statement below to select data from the virtual table with input parameter:

 

SELECT * FROM SYSTEM_B_SCHEMA.Z_VIRTUAL_TABLE ('PLACEHOLDER' = ('$$INPUT_PARAMETER$$', 'abc'));

 

But when I tried to create a calculation view in system B based on this virtual table, I did not see any place to maintain/map the input parameter.

 

Therefore I have below 2 questions:

 

1. If I create a calculation/analytic view based on a virtual table with an input parameter, how do I maintain/map the input parameter?

2. Is there any BW data provider (like a virtual provider, Open ODS view, etc.) that can be built on a virtual table with an input parameter? How do I maintain the input parameter in the BW data providers?

 

 

Any post or document will be appreciated!

 

Thanks and best regards,

Tim


Extremely high CPU load (2998%) hdbindexserver


Hello gurus

 

One of our servers running HANA has an extremely high CPU load.

All our CPUs are at 100%, and when I run the top command I get the following result:

[Screenshot attached: Screenshot-9-12-2015 16.57.35.png]

HANA studio is not accessible anymore. We've already tried rebooting the server and the database...

 

Do any of you have a suggestion, or have you had a similar issue?

 

Many many thanks in advance,

Kind regards,

Bart

Unable to Open Calculation View Cube in Design Studio or Analysis


Hi All,

 

I have a Star Join calculation view with one fact table joining to 6 dimension views. I'm able to activate and deploy the view with no errors, and I can view the data using the Data Preview editor and the SQL console. The error happens when I try to create a report in Design Studio. In the Edit Initial View screen I get the prompts; once I make the data selection, I get this error message:

 

 

Analysis Application


Unknown Error
While processing the current request, an exception occurred which could not be handled by the application or the framework.

Log ID: 6dc57303-1df9-4644-85d1-212b9a6fc70c

 

Unable to process request. Contact your system administrator.
To facilitate analysis of the problem, keep a copy of this error page

 

 

We are sorry for the inconvenience

 

If I start to remove some of the dimensions, then the view works, which leads me to believe this might not be a join error.

 

Any help is appreciated. 

HDB procedure with TABLE output


Hi folks,

 

I've had much success using HDB procedures with SCALAR inputs and outputs; however, I'm tinkering with TABLE-type output and I can't seem to get it to work unless I have a physical table bound to the output variable.

 

Does an HDB procedure with TABLE output require that the table physically exists, or is it treated more like a temp table? What I'm trying to do is create an HDB procedure and consume it via ABAP, but I can't seem to do it without creating a physical table bound to the procedure. If many people call the same ABAP program, that would be an issue. Essentially, I want to create a session-specific table that only exists in memory and pass that to the output of the procedure. Is this possible?

 

Thanks,

-Patrick

What are the differences between the service packs in SAP HANA


Hi All,

 

What are the new features added, and what issues are rectified, in the different service packs of HANA?

 

Please Advise

 

Thanks

Krishna

HANA Calculation View temporal Join


Hi all,

 

I have a question: I want to join two tables in a graphical calculation view to get FIELD1 from TAB1:

 

 

[Screenshot attached: Join.JPG]

 

Like this:

if  tab2-weekday   =  tab1-weekday
and tab2-time_from GE tab1-time
and tab2-time_to   LE tab1-time
-> GET FIELD1

 

Does anyone have an idea how I can do this with a graphical calculation view?

 

Best Regards

Thorsten


