Wednesday, March 7, 2007

SAP BI monitoring


BW MONITORING




1. Introduction

2. Monitoring of Process Chains
3. Data Load Monitoring

- Process Overview
- Job Overview
- Short Dump
- Transactional RFC
- ALE Management
- System Log
- Check Connection

4. Lock Objects (In Source System)

5. Table Space Overflow (ODS Activation failure)

6. Standard background jobs for collecting the system statistics

7. Index maintenance and statistics update for InfoCubes

8. EarlyWatch alert


INTRODUCTION


Monitoring, put simply, means making sure the system runs smoothly without causing any problems for the activities that have to run in it for basic business needs. Proactive monitoring always leads to better results than reactive monitoring.

The following document gives an overview of the various methods of monitoring in the BW system. It helps in examining the issues that may arise in the system from time to time (such as failing jobs, missing IDocs, RFC connection problems, missing data or incorrect data formats in the source system, object locks, table space overflow, etc.) and resolving them through proper monitoring. This enables us to keep the system safe and running properly almost all the time, without hindrance to the business.


MONITORING OF PROCESS CHAINS

A process chain is a sequence of processes that wait in the background for an event. Some of these processes trigger a separate event that can start other processes in turn.

Use
In an operating BW system, a multitude of processes occur regularly in addition to the loading process.
Process chains are used to:
- Automate complex schedules in BW with the help of event-controlled processing,
- Visualize the schedule by using network applications, and
- Centrally control and monitor the processes.

RSPC is the transaction used to schedule jobs under process chains. RSPCM is the transaction used to monitor process chains.
Process chain monitoring:
The following screenshots help illustrate how process chains are monitored.
Scenario 1: The screenshot below shows the process chain for a data loading object.
GOTO → RSPCM transaction. It will display the following screen.
Check the status in the first column to see whether all the process chains have a green status.

Statuses will be in three different colours:

Green: Success
Yellow: Running
Red: Error

If the status is yellow, click on that process chain.

Ex: In this screenshot, click on the ZPC_FL_MD_SD process chain.

You will then get the following screen.

In the above screen, click the Load Data step which is in yellow (meaning it is still running), then right-click on it and select the option Display Messages; a pop-up box will appear showing the status of the job.
Ex: whether it is still running, has completed successfully, etc.

In case you want to look into the data load monitor screen (RSMO), click on the Process Monitor button in the pop-up box, [OR] if you want to see the job overview (SM37) of this load in the source system, click on the Backg tab and then click the Batch Monitor button.

Scenario 2: The screenshot below shows the monitoring of a process chain for jobs that do not belong to data loading, such as the background job scheduled for ODS activation.
Click on the process chain that is in the running state on the initial screen of RSPCM, select the ODS object as shown in the screenshot below, then right-click on it and select Display Messages.
A dialog box appears in which you can see the log status of the background job.
When you click on Batch Monitor, it takes you to the SM37 (Job Overview) screen; if the job has failed, analyze why it failed by looking at the job log.
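As a side note, the same check can be done programmatically instead of browsing SM37. The lines below are only a hedged sketch against the standard job status table TBTCO; the job name pattern BI_PROCESS% and the status value 'A' (cancelled) are assumptions you should verify on your release.

* Hedged sketch: list cancelled BI_PROCESS* background jobs for today.
* TBTCO is the job status table behind SM37; status 'A' = cancelled/aborted.
REPORT zbw_cancelled_jobs.

DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
      ls_job  TYPE tbtco.

SELECT * FROM tbtco INTO TABLE lt_jobs
  WHERE jobname  LIKE 'BI_PROCESS%'
    AND status   = 'A'
    AND strtdate = sy-datum.           "actual start date of the job

LOOP AT lt_jobs INTO ls_job.
  WRITE: / ls_job-jobname, ls_job-jobcount, ls_job-strtdate, ls_job-strttime.
ENDLOOP.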
Scenario 3: Process chains may fail sometimes.

Steps to restart the Process Chain:

Sometimes it is not enough to simply set a request to green status in order to run the process chain from that step to the end.
You need to set the failed request/step to green in the database and also raise the event that forces the process chain to run to the end from the next request/step onwards. To do this, open the messages of the failed step by right-clicking on it and selecting 'Display Messages'.

In the popup that opens, click on the 'Chain' tab.
In a parallel session, go to transaction SE16 for table RSPCPROCESSLOG and display the entries with the following selections:
1. Copy the variant from the popup to the variant field of table RSPCPROCESSLOG.
2. Copy the instance from the popup to the instance field of table RSPCPROCESSLOG.
3. Copy the start date from the popup to the batch date field of table RSPCPROCESSLOG.
Press F8 to display the entries of table RSPCPROCESSLOG. Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the FM in test mode.
Now copy the entries of table RSPCPROCESSLOG to the input parameters of the function module as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variant -> i_variant
4. rspcprocesslog-instance -> i_instance
5. Enter 'G' for parameter i_state (this sets the status to green).
Now press F8 to run the FM.
The actual process will then be set to green, the following process in the chain will be started, and the chain can run to the end.
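The whole sequence above can also be wrapped in a small ABAP report instead of going through SE16 and SE37 manually. Treat the following only as a hedged sketch: the RSPC_* data element names used for the parameters and the exact field names of RSPCPROCESSLOG vary slightly between releases, so verify them in SE11/SE37 before using anything like this.

* Hedged sketch: force a failed process chain step to green so the chain
* can continue. Values for the parameters come from RSPCPROCESSLOG, exactly
* as described in the steps above.
REPORT zpc_force_step_green.

PARAMETERS: p_logid TYPE rspc_logid,      "RSPCPROCESSLOG log ID
            p_type  TYPE rspc_type,       "process type of the failed step
            p_var   TYPE rspc_variant,    "variant of the failed step
            p_inst  TYPE rspc_instance.   "instance of the failed step

START-OF-SELECTION.
* 'G' sets the step status to green, which raises the event that lets the
* chain run on from the next step.
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid    = p_logid
      i_type     = p_type
      i_variant  = p_var
      i_instance = p_inst
      i_state    = 'G'.

This does exactly what running RSPC_PROCESS_FINISH in SE37 test mode does; it only saves the copy-and-paste between sessions.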

DATA LOAD MONITORING

While discussing process chains, we came across the Process Monitor button, which takes us to the data load monitor screen of the selected load (see the process chain scenarios above). To see all the data loads in the system, use the RSMO transaction, where we can see all the data loads in the system.

GOTO → RSMO

It will display all the loads which have finished successfully, are still running, or have terminated. The status can be determined by the lights, i.e.
Green – Successfully finished
Yellow – Still running [OR] stuck due to some problem
Red – Error

For example, in the following screenshot we can see a load with the status still running, but it also displays the message that the IDocs have not arrived from the source system.

Steps to monitor IDOC Status in PRD:

As the load is coming from PRDCLNT510, log in to that system and:

1. Execute the transaction WE02.
2. Select the date and the basic type as RSR*.
3. Check if any IDoc with destination BWP is in status 64.
4. If so, copy the IDoc number, go to BD87, paste the IDoc number, select it and process it (F8).
5. Also go to the monitor in BWP to make sure all our jobs are successful.
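The check in steps 2 and 3 can also be scripted. The following is a hedged sketch that reads the IDoc control table EDIDC for entries in status 64; the filter on basic type RSR* mirrors step 2 above, and both partner fields are listed so you can spot the BWP destination. Processing the stuck IDocs is still best left to BD87, as described in step 4.

* Hedged sketch: list IDocs stuck in status 64 (ready to be passed to the
* application) that were created today, so they can be processed via BD87.
REPORT zidoc_status_64.

DATA: lt_idocs TYPE STANDARD TABLE OF edidc,
      ls_idoc  TYPE edidc.

SELECT * FROM edidc INTO TABLE lt_idocs
  WHERE status = '64'
    AND idoctp LIKE 'RSR%'            "BW request/info IDoc basic types
    AND credat = sy-datum.

LOOP AT lt_idocs INTO ls_idoc.
* DOCNUM = IDoc number, SNDPRN/RCVPRN = sending/receiving partner (e.g. BWP)
  WRITE: / ls_idoc-docnum, ls_idoc-idoctp, ls_idoc-sndprn, ls_idoc-rcvprn.
ENDLOOP.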

Along with this, some other issues can occur, such as:
- Job termination in the source system [check with SM37]
- TRFCs not getting executed in the source system [SM58]
- Problems with the RFC connection [SM59]
- ALE problems [WE02 and BD87]

The RSMO transaction is also helpful in navigating to the exact location of an error: click the Environment option in the menu bar, where we can select options like Process Overview [SM50], Job Overview [SM37], Short Dump [ST22], TRFCs [SM58], ALE Management [WE02], System Log [SM21], etc., and then select the system in which the problem occurred, whether the source system or the data warehouse.

LOCK OBJECTS

Scenario:
Data coming from R/3 to BW may not get transferred because, during the data transfer, some of the objects may be locked or being edited by developers.

Solution:
To check the lock entries, go to transaction SM12, delete the lock on the required object, and start the load again in BW.
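If you only want to see who is holding locks before deciding whether to delete anything in SM12, the lock entries can be read with the standard function module ENQUEUE_READ. The sketch below is an assumption-level example (the parameter names and the SEQG3 result structure should be checked in SE37 on your release); actual deletion of foreign locks should still be done deliberately in SM12.

* Hedged sketch: list the current lock entries that SM12 would show.
REPORT zshow_lock_entries.

DATA: lt_enq TYPE STANDARD TABLE OF seqg3,
      ls_enq TYPE seqg3.

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient = sy-mandt       "current client
    gname   = ' '            "all lock object/table names; narrow if needed
    guname  = ' '            "all users
  TABLES
    enq     = lt_enq.

LOOP AT lt_enq INTO ls_enq.
* user holding the lock, locked table, lock argument, lock mode
  WRITE: / ls_enq-guname, ls_enq-gname, ls_enq-garg, ls_enq-gmode.
ENDLOOP.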

MEMORY MANAGEMENT

Scenario:
ODS activation may fail due to insufficient space in the database.

Steps to find the problem:
Check the error log in transaction SM37 for the job name “BI_PROCESS_ODSACTIVAT”, check the system log entries in SM21, and also analyze the short dump in transaction ST22; together these will help in resolving the problem.

Solution:
Increase the size of the database by adding a data file through MS SQL Enterprise Manager; this provides enough space and results in successful activation of the ODS.

STANDARD BACKGROUND JOBS FOR COLLECTING THE
SYSTEM STATISTICS

SAP provides standard background jobs that collect and update the overall system statistics, and a few jobs that clean up older, unwanted data; this improves performance and keeps the system fast. Standard jobs are also called housekeeping jobs.

Some of the Standard Jobs:

SAP_CCMS_MONI_BATCH_DP
SAP_COLLECTOR_FOR_JOBSTATISTIC
SAP_COLLECTOR_FOR_NONE_R3_STATISTIC
SAP_COLLECTOR_FOR_PERFMONITOR
SAP_REORG_JOBS
SAP_REORG_ABAPDUMPS
SAP_REORG_BATCHINPUT
SAP_REORG_JOBSTATISTIC
SAP_REORG_SPOOL
SAP_REORG_UPDATERECORDS
SAP_REORG_XMILOG

The standard job SAP_REORG_JOBS is scheduled daily and deletes all jobs with the status Scheduled, Released, Finished or Cancelled that are older than 10 days.

Scenario: Sometimes the total number of entries in the tables TBTCO, TBTCS and TBTCP grows so large that it causes a space issue for incoming jobs in these tables.

Solution: Schedule the custom job Z_SAP_REORG_JOBS with variant ZVAR_BTC; it deletes jobs older than 7 days from TBTCO, TBTCS and TBTCP, thereby freeing space for jobs in the Scheduled, Released, Finished and Cancelled statuses.

In case the same problem occurs again in the future, we need to reduce the number of days.

GOTO → SE38, enter the program name RSBTCDEL2 and click on Variants → enter the variant ZVAR_BTC and click on Change → enter the required number of days → Save → schedule the above job daily with variant ZVAR_BTC in SM36.
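For completeness, the step behind the custom job is nothing more than running RSBTCDEL2 with the ZVAR_BTC variant. A hedged sketch of a thin wrapper report (useful if you ever need to trigger the cleanup ad hoc) is shown below; it assumes the variant already exists and that the retention days are maintained inside the variant.

* Hedged sketch: run the standard job deletion report with the ZVAR_BTC
* variant (the variant holds the 7-day retention and the status selection).
REPORT zrun_reorg_jobs.

SUBMIT rsbtcdel2
  USING SELECTION-SET 'ZVAR_BTC'
  AND RETURN.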

The following screen shot helps in defining/scheduling these standard jobs.

INDEX MAINTENANCE AND STATISTICS UPDATE FOR INFOCUBES

An index is a pointer to database records; it enables faster access to the data. We need to maintain indexes on the InfoCubes for better performance.

The types of indexes and the administration possibilities can be quite different in SAP BW compared to R/3.

Creation, maintenance and monitoring of indexes has to be done regularly to avoid missing or degenerated indexes.

The following screen shot helps to check and recreate the indexes.

GOTO → RSA1 transaction and select the Modeling tab in the right panel → click on InfoProvider → select the required cube for monitoring the indexes → right-click on the cube → click on Manage.

Check the colour of the button to the right of the Check Indexes option (see the screenshot below). If it is green, the indexes are fine; if it is red, we need to check and repair the indexes.

To automate this, click on Create Index (Batch); it will schedule a job “BI_PROCESS_INDEX”. This job takes care of the creation of the indexes.
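Where indexes on many InfoCubes have to be checked at once, SAP also ships the check-and-repair report SAP_INFOCUBE_INDEXES_REPAIR. The sketch below merely submits it and is an assumption on two counts: the report's selection screen differs between support packages, and it is primarily documented for Oracle-based BW systems, so verify its behaviour on your database before scheduling it.

* Hedged sketch: trigger the standard InfoCube index check/repair report.
* Run this in a maintenance window only, after verifying its selection
* screen and database support on your release.
REPORT ztrigger_index_repair.

SUBMIT sap_infocube_indexes_repair AND RETURN.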

Calculate Statistics:

For database performance reasons, it is essential to ensure that the database optimizer statistics are collected at the optimum time and in the recommended way. Collect the database optimizer statistics after changes in the data. In a BW system, this means that optimizer statistics should be scheduled after the data loading.

It can be the case that there is not enough time to run a system wide statistics collection after the data loading has been completed. This might be due to the fact that the users start running queries directly after the data loads have been completed or that certain loads take much longer to complete than others. If this is the case then consider integrating a statistics collection method into the relevant Process Chain. Statistics collection comes as a standard process in the process chain maintenance.

We can check whether statistics are up to date for each infocube.

GOTO → RSA1 transaction and select the Modeling tab in the right panel → click on InfoProvider → select the required cube → right-click on the cube → click on Manage → enter the percentage of InfoCube data used to create statistics [the default is 10%].

Click on Check Statistics and then click on Refresh Statistics in order to get the current statistics.

To automate this, click on Create Statistics (Batch); it will schedule a job which can be monitored in SM37.

In case you want to update the statistics for the tables on the database side, i.e. on the MS SQL Server:

The following screenshot helps in running the statistics update on the database side.

Enter the name of the job → click on the Steps tab → type the exact syntax [UPDATE STATISTICS <table>], and select whether it should run once or periodically.

EARLYWATCH ALERT:

The SAP EarlyWatch Alert service can be used to facilitate BW maintenance. It offers a unique application and basis perspective on the system. Utilize the SAP EarlyWatch Alert as part of the system monitoring strategy. It is also a good information source and checklist which can be used in meetings between the application and basis teams.

Transaction SDCC (Service Data Control Center) is used to open the EarlyWatch session by scheduling the job under the option Task processor.