1. What is ETL?
In data warehousing architecture, ETL is an important component that manages the data for any business process. ETL stands for Extract, Transform and Load. Extract reads data from a source database, Transform converts the data into a format appropriate for reporting and analysis, and Load writes the data into the target database.

2. Explain what operations ETL testing includes?
ETL testing includes:

- Verifying that data is transformed correctly according to business requirements
- Verifying that data is loaded into the data warehouse without truncation or data loss
- Making sure the ETL application reports invalid data and replaces it with default values
- Making sure data loads within the expected time frame, to confirm scalability and performance

3. Mention what the types of data warehouse applications are, and what is the difference between data mining and data warehousing?
The types of data warehouse applications are information processing, analytical processing and data mining. Data mining is the process of extracting hidden predictive information from large databases and interpreting it, while data warehousing is the process of aggregating data from multiple sources into one common repository on which such analysis can run.

4. What are the various tools used in ETL?
Commonly used ETL tools include Informatica PowerCenter, IBM DataStage, Microsoft SQL Server Integration Services (SSIS), Oracle Data Integrator, Talend and Ab Initio.
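Whatever tool is used, the Extract, Transform and Load steps from Q1 follow the same shape. A minimal Python sketch, with sqlite3 standing in for the source and target databases and all table and column names purely illustrative:

```python
import sqlite3

# Source and target databases (in-memory stand-ins for real systems)
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.executescript("""
    CREATE TABLE orders (id INTEGER, amount_cents INTEGER, country TEXT);
    INSERT INTO orders VALUES (1, 1050, 'us'), (2, 2000, 'de'), (3, 99, 'us');
""")
target.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, country TEXT)")

# Extract: read rows from the source database
rows = source.execute("SELECT id, amount_cents, country FROM orders").fetchall()

# Transform: convert cents to dollars, normalise country codes
transformed = [(oid, cents / 100.0, country.upper()) for oid, cents, country in rows]

# Load: write the transformed rows into the target warehouse table
target.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", transformed)
target.commit()

print(target.execute("SELECT * FROM fact_orders ORDER BY id").fetchall())
# -> [(1, 10.5, 'US'), (2, 20.0, 'DE'), (3, 0.99, 'US')]
```

Real pipelines add error handling, incremental extraction and bulk loading, but the three phases stay the same.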
5. What is fact? What are the types of facts?
It is a central component of a multi-dimensional model which contains the measures to be analysed. Facts are related to dimensions. The types of facts are additive, semi-additive and non-additive facts.

6. Explain what Cubes and OLAP Cubes are?
Cubes are data processing units comprised of fact tables and dimensions from the data warehouse, and they support multi-dimensional analysis. OLAP stands for Online Analytical Processing, and an OLAP cube stores large amounts of data in multi-dimensional form for reporting purposes. It consists of facts, called measures, categorized by dimensions.

7. What is tracing level and what are the types?
Tracing level is the amount of data stored in the log files when a session runs. The types are Normal and Verbose: Normal logs the tracing information in a summarized manner, while Verbose logs information for each and every row.
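The fact/dimension structure behind cubes (Q5 and Q6) can be made concrete with a toy star schema. Here sqlite3 stands in for the warehouse, and all table names are illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Dimension table: descriptive attributes used to slice the data
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');

    -- Fact table: measures stored at the grain of one row per sale
    CREATE TABLE fact_sales (product_id INTEGER, year INTEGER, revenue REAL);
    INSERT INTO fact_sales VALUES (1, 2023, 100.0), (1, 2024, 150.0),
                                  (2, 2023, 80.0),  (2, 2024, 120.0);
""")

# A cube-style query: the measure (revenue) aggregated across two dimensions
for row in db.execute("""
    SELECT p.category, f.year, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category, f.year
    ORDER BY p.category, f.year
"""):
    print(row)
# -> ('Books', 2023, 100.0)
#    ('Books', 2024, 150.0)
#    ('Games', 2023, 80.0)
#    ('Games', 2024, 120.0)
```

An OLAP cube pre-computes and stores aggregations like this one so that reports do not have to re-run the join and GROUP BY each time.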
8. What is Grain of Fact?
Grain of fact is the level of detail at which the fact information is stored. It is also known as fact granularity.

9. What is transformation?
A transformation is a repository object which generates, modifies or passes data. Transformations are of two types: active and passive.

10. Explain the use of Lookup Transformation?
The Lookup Transformation is useful for:

- Getting a related value from a table using a column value
- Checking whether records already exist in the table
- Updating slowly changing dimension tables

11. Mention what the advantage of using the DataReader Destination Adapter is?
The advantage of using the DataReader Destination Adapter is that it populates an ADO recordset (consisting of records and columns) in memory and exposes the data from the DataFlow task by implementing the DataReader interface, so that other applications can consume the data.

12. In case you have a non-OLEDB (Object Linking and Embedding Database) source for the lookup, what would you do?
If you have a non-OLEDB source for the lookup, you have to use a cache to load the data and use it as the source.

13. In what cases do you use dynamic cache and static cache in connected and unconnected transformations?
Dynamic cache is used when you have to update a master table and for slowly changing dimensions (SCD) Type 1. Static cache is used for flat files.

14. What is a data source view?
A data source view allows you to define the relational schema which will be used in the Analysis Services databases. Dimensions and cubes are created from data source views rather than directly from data source objects.

15. How can you extract SAP data using Informatica?
You can extract SAP data using Informatica with the PowerConnect option: install and configure the PowerConnect tool, then import the SAP source into the Source Analyzer (PowerConnect acts as a gateway between Informatica and SAP), and generate the ABAP code for the mapping so that Informatica can pull the data from SAP.

16. Explain what a staging area is and what the purpose of a staging area is?
Data staging is an area where we hold the data for a short period on the data warehouse server. Data staging includes the following steps:

- Source data extraction and data transformation (restructuring)
- Data transformation (data cleansing, value transformation)
- Surrogate key assignment

17. What is Bus Schema?
A BUS schema is used to identify the dimensions common to the various business processes. It consists of conformed dimensions along with a standardized definition of information.

18. What is data purging?
Data purging is the process of deleting data from the data warehouse. It deletes junk data such as rows with null values or extra spaces.

19. What are Schema Objects?
Schema objects are the logical structures that directly refer to the database's data. Schema objects include tables, views, sequences, synonyms, indexes, clusters, functions, packages and database links.

20. Explain these terms: Session, Worklet, Mapplet and Workflow?
Mapplet: It creates sets of transformations.
Worklet: It represents a specific set of tasks.
Workflow: It's a set of instructions that tell the server how to execute tasks.
Session: It helps the server move data from sources to targets with a set of instructions.
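Several of the answers above (the Lookup Transformation in Q10, the static cache in Q13) boil down to the same pattern: pre-load a reference table into memory, then enrich each incoming row against it. A minimal Python sketch, with all names illustrative:

```python
# Reference (lookup) table, loaded once into memory -- the "static cache"
country_lookup = {"US": "United States", "DE": "Germany", "FR": "France"}

def lookup_transform(rows, cache, default="Unknown"):
    """Pass each row through, adding the related value from the cache."""
    for row in rows:
        yield {**row, "country_name": cache.get(row["country"], default)}

incoming = [
    {"order_id": 1, "country": "US"},
    {"order_id": 2, "country": "DE"},
    {"order_id": 3, "country": "BR"},  # not in the cache -> default value
]

for row in lookup_transform(incoming, country_lookup):
    print(row)
```

A dynamic cache differs in that the transformation also inserts or updates entries in the cache as new rows arrive, which is why it suits master-table updates and SCD Type 1 loads.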