Can anyone tell me the base tables in R/3 for the standard content DataSources 2LIS_03_BF and 2LIS_03_UM? Are there any tables involved other than MSEG, MKPF, BSEG, and BKPF? I am trying to understand how the PROCESSKEY, STOCKRELEV, STOCKCAT, INDSPECSTK, CPPVLC, and CPQUABU fields are populated from these tables in R/3. Any pointers are highly appreciated.
Thanks for the quick response. I am trying to simulate the exact extraction of 2LIS_03_BF and 2LIS_03_UM, to make sure the data delivered by the standard Business Content matches my generic extractor. While creating the generic extractor, I cannot figure out how the process key and the other fields I mentioned in my earlier post are derived by the program behind the scenes.
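For what it's worth, the process key is not something you can reproduce with a simple table join: it is derived inside the LO extraction logic at posting time, based on attributes of the material document (movement type, debit/credit indicator, special stock indicator, and so on). As a rough mental model only, here is a minimal Java sketch of the kind of lookup involved; the movement types and process key values below are hypothetical placeholders, not SAP's actual assignments.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch only: illustrates the *kind* of derivation the LO
// extractor performs for 2LIS_03_BF. The key/value pairs below are
// hypothetical placeholders, NOT SAP's real movement-type assignments.
public class ProcessKeySketch {

    // (movement type, debit/credit indicator) -> PROCESSKEY (BWVORG)
    private static final Map<String, String> PROCESS_KEYS = new HashMap<>();
    static {
        PROCESS_KEYS.put("101/S", "001"); // hypothetical: goods receipt
        PROCESS_KEYS.put("201/H", "010"); // hypothetical: goods issue
    }

    static String deriveProcessKey(String movementType, String debitCredit) {
        // Fields such as STOCKRELEV, STOCKCAT, and INDSPECSTK would be
        // derived similarly at posting time from attributes of the
        // material document line item (MSEG).
        return PROCESS_KEYS.getOrDefault(movementType + "/" + debitCredit, "");
    }

    public static void main(String[] args) {
        System.out.println(deriveProcessKey("101", "S")); // -> 001
    }
}
```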
SAP provides standard logistics (LO) extractors, which are used to extract data for applications such as Purchasing, Inventory Controlling, Shop Floor Control, Sales, Deliveries, and so on. Transaction LBWE opens the LO Cockpit, where all the DataSources belonging to the logistics extractors are listed (see the screenshot of the LO Cockpit). In this document we will try to give information on each of the objects shown in that screenshot, including update mode, job control, queued delta, and more.
When we create a sales order or a purchase order and press Save, we either get a message that the document was created successfully, or the posting ends in an error and the document is not created at all. This posting of the document to the application tables is the time-critical V1 update.
V2 Updates: As soon as the V1 update is finished, the system starts the V2 update, which writes the same data to the statistics tables; these are the tables that capture data for reporting. For example, the LIS tables S*** store sales data: they contain the same data as VBAK and VBAP, but in a different structure optimized for reporting.
TMCEXCFS: This table records which fields from the communication structure are chosen (active) in the extract structure of a DataSource. In the screenshots below, I selected MC11VA0HDR, the extract structure of DataSource 2LIS_11_VAHDR; the third screenshot shows the fields that are active in this extract structure.
TMCEXCFZ: This table is similar to the one above, but it records the fields that have been added to the delivered extract structure during enhancement, including any ZZ fields we have appended. See the screenshot for the MC11VA0HDR extract structure.
Direct Delta: With this update mode, data is transferred directly to the BW delta queue with each document posting, in the same LUW as the V1 update. When a delta request comes from BW, it reads the BW delta queue directly and the delta data is fetched.
Unserialized V3 Update: The direct delta method is rarely used because of performance issues; most of the time we use the V3 update or the queued delta. With this method, data is transferred to the update queue with a V3 call and kept there until the collective job runs to move it from the update queue to the delta queue. This method does not ensure serialization of the document data, so updating a DSO is not recommended if serialization is required.
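To make the queue mechanics concrete, here is a small conceptual simulation in Java; all class and method names are mine and purely illustrative, mirroring only the flow described above (the V3 call fills the update queue, the collective job moves records to the delta queue, and a BW delta request drains it). This is not SAP code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Conceptual simulation of the unserialized V3 update flow; all names
// are illustrative, this is not SAP code.
public class QueuedDeltaSketch {
    private final Queue<String> updateQueue = new ArrayDeque<>(); // filled by the V3 call
    private final Queue<String> deltaQueue  = new ArrayDeque<>(); // read by the BW delta request

    // Document posting: the V3 call parks the record in the update queue.
    void postDocument(String doc) {
        updateQueue.add(doc);
    }

    // Collective job (scheduled via LO Cockpit job control): moves records
    // from the update queue to the BW delta queue. In the unserialized
    // variant, the original posting order is not guaranteed.
    void collectiveRun() {
        while (!updateQueue.isEmpty()) {
            deltaQueue.add(updateQueue.poll());
        }
    }

    // BW delta request: drains the delta queue.
    List<String> deltaRequest() {
        List<String> delta = new ArrayList<>(deltaQueue);
        deltaQueue.clear();
        return delta;
    }

    public static void main(String[] args) {
        QueuedDeltaSketch s = new QueuedDeltaSketch();
        s.postDocument("SalesOrder 4711");
        s.postDocument("SalesOrder 4712");
        s.collectiveRun();                    // update queue -> delta queue
        System.out.println(s.deltaRequest()); // [SalesOrder 4711, SalesOrder 4712]
    }
}
```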
I was recently working on invoice data extraction from ECC to BW, implementing the standard Business Content objects for 2LIS_06_INV. However, the standard content delivers no mapping between the DataSource and the InfoSource. I searched a lot of SDN threads and SAP Help portals but could not find the mapping, and some threads said this Business Content is incomplete. Finally I found the answer, and I thought it would be helpful for anyone searching for these mapping details now or in the near future.
The mappings to the InfoSource are based on the DSO 0LIV_DS01, but we can still use these mappings and this data flow to get the invoice details into the SRM standard DSO 0SRIV_D3 (this method is applicable to the Extended Classic scenario in SRM).
Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure that 'No Marker Update' is NOT checked; enter the request ID and click Release. The data is now compressed with marker update.
Log in on the BI side, expand the tree of 0IC_C03, select the InfoPackage for 2LIS_03_BF, and trigger it to load the data up to the PSA; then trigger the DTP to load the data from the PSA to ZIC_C03.
Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure that 'No Marker Update' IS checked; enter the request ID and click the Release button. The data is now compressed without marker update.
Log in on the BI side, expand the tree of 0IC_C03, select the init InfoPackage for 2LIS_03_UM, and trigger it to load the data up to the PSA; then trigger the DTP to load the data from the PSA to ZIC_C03.
Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure that 'No Marker Update' IS checked; enter the request ID and click the Release button. The data is now compressed without marker update.
Log in on the BI side, expand the tree of 0IC_C03, select the delta InfoPackage for 2LIS_03_BF, and trigger it to load the data up to the PSA; then trigger the DTP to load the data from the PSA to ZIC_C03.
Check that the data loaded successfully into the cube. Go to the Manage screen, click the Collapse tab, and make sure that 'No Marker Update' is NOT checked; enter the request ID and click the Release button. The data is now compressed with marker update.
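Why the marker flag flips between these steps: 0IC_C03 uses non-cumulative key figures, and compression with marker update adjusts the stored reference point (the marker) for current stock, while historical movements are compressed without marker update so that stock already contained in the opening balance is not counted twice. The following Java snippet is a simplified illustration of that arithmetic, assuming a single material and simple additive movements; it is not BW's actual algorithm.

```java
// Simplified illustration of the non-cumulative "marker" idea in
// 0IC_C03. Real BW logic is far more involved; this only shows why
// historical movements must not update the marker.
public class MarkerSketch {
    public static void main(String[] args) {
        double marker = 0;                 // reference point = current stock

        // Opening stock (e.g. from 2LIS_03_BX): compressed WITH marker update.
        marker += 120;                     // current stock is 120 pieces

        // Historical movements from the 2LIS_03_BF init (+50 receipt, -30 issue)
        // are compressed WITHOUT marker update: they only explain the history
        // already contained in the opening stock of 120.
        double[] historicalMovements = {+50, -30};

        // Stock at an earlier date = marker minus the movements posted
        // after that date (non-cumulatives are calculated backwards).
        double stockBeforeHistory = marker;
        for (double m : historicalMovements) {
            stockBeforeHistory -= m;
        }
        System.out.println("Current stock (marker): " + marker);             // 120.0
        System.out.println("Stock before history:   " + stockBeforeHistory); // 100.0

        // Delta movements from 2LIS_03_BF ARE compressed with marker
        // update, so the marker keeps tracking the current stock.
        marker += 10;                      // new goods receipt of 10
        System.out.println("Marker after delta:     " + marker);             // 130.0
    }
}
```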
UD Connect (Universal Data Connect) uses Application Server J2EE connectivity to enable reporting and analysis of relational SAP and non-SAP data. To connect to data sources, UD Connect can use the JCA-compatible (J2EE Connector Architecture) BI Java Connector.
1. UD Connect Source: A UD Connect source is an instance that can be addressed as a data source using the BI JDBC Connector. Prerequisite: the J2EE Engine must be installed with the BI Java components; for more information, see the SAP NetWeaver Installation Guide on the SAP Service Marketplace at service.sap.com/instguides. UD Connect source objects are relational data store tables in the UD Connect source.
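Under the hood, the BI JDBC Connector wraps standard JDBC access to exactly such tables. Purely as an illustration of what a UD Connect source object looks like from the Java side, here is a plain-JDBC read of a relational table; the driver URL, credentials, and table name are placeholders for whatever source you connect.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Plain-JDBC illustration of a "UD Connect source object": a relational
// table in the connected source. URL, credentials, and table name are
// placeholders; substitute those of your actual source system.
public class UdConnectSourceSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://dbhost:5432/sales"; // placeholder source
        try (Connection con = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT region, revenue FROM sales_summary")) {
            while (rs.next()) {
                System.out.println(rs.getString("region") + ": " + rs.getDouble("revenue"));
            }
        }
    }
}
```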
You can now integrate the data of the source object into BI, and you have two choices: first, you can extract the data, load it into BI, and store it there physically; second, provided that the conditions for this are met, you can read the data directly in the source using a VirtualProvider.
The BI JDBC Connector is a JCA-enabled (J2EE Connector Architecture) resource adapter. It implements the APIs of the BI Java SDK and lets you connect various data sources to the applications you have created using the SDK. You can also use the BI JDBC Connector to make these data sources available in SAP BI systems (by means of UD Connect), or to create systems in the portal for use in Visual Composer scenarios. The following diagram outlines the potential usage scenarios for the BI Java Connectors:
As illustrated, you can use the BI JDBC Connector to create systems for use in four different scenarios. Since the BI JDBC Connector is part of SAP Universal Data Integration (UDI), these are often referred to as UDI scenarios:
You can build custom Java applications based on data in systems created with the BI Java Connectors. More information: BI Java SDK. Information about configuring the BI Java Connectors for this scenario is available under Configuring BI Java Connector.
For information on how to create and configure systems in the portal for use in BEx Web and Visual Composer scenarios, see Running the System Landscape Wizard and Editing Systems in the NetWeaver Portal System Landscape documentation.
Note: For Universal Data Connect (UD Connect) only: if you enter the name of the resource adapter in the duplication process, you have to add the prefix SDK_ to the JNDI name. Use only uppercase letters in the name to ensure that UD Connect can recognize the connector.
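To show what that naming rule means in practice, here is a hedged sketch of a server-side JNDI lookup of the deployed resource adapter via the standard JCA CCI interfaces; the concrete JNDI name SDK_JDBC is an assumed example (prefix SDK_ plus an uppercase connector name), not a guaranteed default.

```java
import javax.naming.InitialContext;
import javax.resource.cci.Connection;
import javax.resource.cci.ConnectionFactory;

// Sketch of a server-side JNDI lookup of the BI JDBC resource adapter.
// "SDK_JDBC" is an assumed example name (prefix SDK_ plus an uppercase
// connector name, as required by the note above).
public class ConnectorLookupSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("SDK_JDBC");
        Connection con = cf.getConnection(); // JCA CCI connection to the source
        // ... use the connection, then release it
        con.close();
    }
}
```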
When implementing an SAP system, the first thing to be done is to configure the BI system according to the customer's needs. Standard Business Content is designed for this purpose. During an implementation we need to know what to configure and how to configure it, and it is better to figure this out in the early stages so that we can deliver according to the user requirements.
This column contains all objects that are either being transferred for the first time, with no active version in the system, or that have been redelivered in a new version; the content timestamp is used to identify these objects in the corresponding tables.
For example, consider a case where someone has already activated an InfoObject from BI Content and made some changes to it. When activating a DSO in which this InfoObject is used, if the Match (X) option is not set, the system will reactivate the object on the person's behalf and the changes they made will be lost.
SAP Demo Content is additional content delivered with SAP BI. It serves as a set of templates and helps in presenting demos to your business users, so that they can get an idea of the toolset in BI. Its main purpose is to demonstrate the functionality of BI.
SAP Demo Content consists of InfoCubes, queries, and a range of InfoObjects in scenarios that are provided with transactional data and master data. A document is provided with each scenario, describing the scenario and its features.
This document covers all the necessary details regarding standard Business Content and the steps to be followed for accessing and activating it. More information will be added as further knowledge is gained.