
Thursday 29 January 2009

Informatica interview questions (Part 2)

We will continue with the second set of questions.

Q 1. What is the difference between Connected and Unconnected Lookups?

Ans: Connected Lookup

1. Receives input values directly from the pipeline.

2. You can use a dynamic or static cache.

3. The cache includes all lookup columns used in the mapping.

4. Can return multiple columns from the same row, or insert into the dynamic lookup cache.

5. If there is no match for the lookup condition, the Informatica Server returns the default value for all output ports. If you configure dynamic caching, the Informatica Server inserts rows into the cache.

6. Passes multiple output values to another transformation. Link lookup/output ports to another transformation.

7. Supports user-defined default values.

Unconnected Lookup

1. Receives input values from the result of a :LKP expression in another transformation.

2. You can use a static cache only.

3. The cache includes all lookup/output ports in the lookup condition and the lookup/return port.

4. You designate one return port (R); it returns one column from each row.

5. If there is no match for the lookup condition, the Informatica Server returns NULL.

6. Passes one output value to another transformation. The lookup/output/return port passes the value to the transformation calling the :LKP expression.

7. Does not support user-defined default values.
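As a quick illustration, this is how an unconnected lookup is typically called from an Expression transformation (the transformation and port names here are made up for the example):

o_dept_name = IIF(ISNULL(in_dept_id), NULL, :LKP.LKP_GET_DEPT_NAME(in_dept_id))

Here LKP_GET_DEPT_NAME is the unconnected Lookup transformation, in_dept_id feeds its lookup condition, and the value of the designated return port comes back into the o_dept_name port.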

Q 2. Difference between Active and passive transformation?

Ans: Active transformation: An active transformation can change the number of records passing through it. We can connect only one active transformation to the same downstream transformation or target, but any number of passive transformations. Active transformations include: Advanced External Procedure, Aggregator, Application Source Qualifier, Filter, Joiner, Normalizer, Rank, Router, Source Qualifier, and Update Strategy.

Passive transformation: A passive transformation can never change the number of records.

Ex. Expression, External Procedure, Lookup, Mapplet Input, Mapplet Output, Sequence Generator, Stored Procedure, XML Source Qualifier

Q 3. What is an indicator file and how it can be used?

Ans: An indicator file is used for event-based scheduling, when you don't know when the source data will be available. A shell command, script, or batch file creates and sends this indicator file to a directory local to the Informatica Server. The server waits for the indicator file to appear before running the session.
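The signalling side can be as simple as the following sketch (the paths and file names are illustrative; the session must be configured to wait for this indicator file):

#!/bin/sh
# stage the source file, then signal the Informatica Server
cp /staging/src_data.dat /infa/srcfiles/src_data.dat
touch /infa/srcfiles/src_ready.ind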

Q 4. Is Lookup an active transformation or a passive transformation?

Ans: Passive.

Q 5. What are the objects that you can’t use in a mapplet?

Ans: 1. COBOL source definitions

2. Joiner transformations

3. Normalizer transformations

4. Non-reusable Sequence Generator transformations

5. Pre- or post-session stored procedures

6. Target definitions

7. PowerMart 3.5-style LOOKUP functions

8. XML source definitions

9. IBM MQ source definitions

Q6. What is incremental aggregation?

Ans: When using incremental aggregation, we apply captured changes in the source to the aggregate calculations in a session. If the source changes only incrementally and we can capture those changes, we can configure the session to process only the changes. This allows the Informatica Server to update the target incrementally, rather than forcing it to process the entire source and recompute the same calculations each time we run the session.

Q7. What is tracing level and what are the different types of tracing level?

Ans: The tracing level determines the amount of information the Informatica Server writes to the session log.

Types of tracing level

1. Terse

2. Normal

3. Verbose initialization

4. Verbose data

Q8. How will you recover sessions?

Ans : If we stop a session or if an error causes a session to stop, refer to the session and error logs to determine the cause of failure. Correct the errors, and then complete the session. The method we use to complete the session depends on the properties of the mapping, session, and Informatica Server configuration.

Use one of the following methods to complete the session:

1. Run the session again if the Informatica Server has not issued a commit.

2. Truncate the target tables and run the session again if the session is not recoverable.

3. Consider performing recovery if the Informatica Server has issued at least one commit.

Q9. When to use Abort, Decode functions?

Ans: ABORT can be used to stop the session on an error condition. If, say, the primary key column contains NULL and you need to stop the session from continuing, you may use the ABORT function in the default value for the port. It can be combined with the IIF and DECODE functions to abort the session conditionally: DECODE searches a port for given values and returns the matching result, which makes it convenient for spelling out the error conditions that feed ABORT.
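A minimal sketch of both functions together (the port and column names are illustrative):

o_emp_id = IIF(ISNULL(EMP_ID), ABORT('EMP_ID is NULL - stopping session'), EMP_ID)
o_status = DECODE(STATUS_CODE, 'A', 'Active', 'I', 'Inactive', ABORT('Unknown status code'))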

Q10 What is constraint-based loading?

Ans: In constraint-based loading, data is loaded into the target tables according to the primary key-foreign key constraints between them, i.e. if we want to load the EMP and DEPT data, DEPT is loaded first and then EMP, because DEPT is the parent table and EMP is the child table. In simple terms, it loads the parent table first, then the child table.


Tuesday 27 January 2009

Informatica interview questions (Part 1)

Following is a list of some Informatica interview questions. We will try to come up with more questions in later posts.

Q.1 A flat file has 10 records as input and I want to push only 5 records to the target. How?
Ans: Create one mapping variable named $$Count with a Count aggregation type and an initial value of 0. Create one Expression transformation with a variable port v_count = SETCOUNTVARIABLE($$Count). Create one Filter transformation after the Expression and put the condition v_count <= 5.
Q2 What is the difference between the direct and indirect loading options in sessions?
Ans: Use file type Direct when we are loading a single file into the target. Use Indirect when we want to load multiple files through a single session in the mapping; the indirect file is a file list containing the names of the source files.

Q3 When we create a target as a flat file and the source as Oracle, how can we have the first row as column names in the flat file?
Ans: We can add a UNION ALL clause in the SQL override of the Source Qualifier to select the header values as literals, so they are populated as the header row, as sketched below.
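A minimal sketch of such an override, assuming the standard EMP table (the ORD column only forces the header row to come first and is not projected):

SELECT C1, C2 FROM (
SELECT 'EMPNO' AS C1, 'ENAME' AS C2, 0 AS ORD FROM DUAL
UNION ALL
SELECT TO_CHAR(EMPNO), ENAME, 1 FROM EMP
ORDER BY ORD
)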

Q 4 How to capture the user information (username) of the person who is executing a workflow or session in Workflow Manager?
Ans: SELECT USER_NAME, WORKFLOW_NAME FROM REP_WFLOW_RUN;

Q 5 How to join two tables which reside in two different databases in the Source Qualifier?
Ans: By using DB links in the SQ override, but throughput will be very slow. A Joiner transformation is the better approach in this case.

Q. 6 What is a code page?
A. A code page contains the encoding that specifies the characters in a set of one or more languages. The code page is selected based on the source of the data. For example, if the source contains Japanese text then a code page should be selected that supports Japanese text.
When a code page is chosen, the program or application for which the code page is set refers to a specific set of data that describes the characters the application recognizes. This influences the way the application stores, receives, and sends character data.

Q 7. Is there any way to handle sequence generation beyond 2 billion records, as the Sequence Generator can generate a maximum of about 2 billion values?
Ans
Method 1
Use an unconnected Lookup on the target. Create a Lookup transformation on the target table, keep only one field (id_field) and remove all remaining fields. Create an input port with the same data type as id_field and select id_field as the return port. In the lookup SQL override write a query such as SELECT NVL(MAX(id_field),0) FROM table, with the lookup condition id_field >= in_field. Then, in an Expression transformation, create a variable port v_dummy that is always zero, call the lookup with v_lkp_return = :LKP.lkp_target(v_dummy), and derive the new key with an output port o_id_field = v_lkp_return + 1.

Method 2
One way would be to use an Expression transformation after the Sequence Generator. Another way is to use an Expression transformation instead of the Sequence Generator, using a variable port as a counter to create the sequence values, as sketched below.
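A minimal sketch of the Expression ports for Method 2 ($$START_VALUE is an assumed mapping parameter holding the starting point):

v_counter (variable port, NUMBER) = v_counter + 1
o_seq_id (output port, NUMBER) = $$START_VALUE + v_counter

Variable ports retain their value from row to row, so v_counter increments once for every row that passes through the transformation.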

Q.8 What is the difference between $ and $$ parameters/variables?
Ans: $ denotes system-defined variables and $$ denotes user-defined parameters/variables.

Q 9. How to use data from SAP/Oracle Apps/Mainframe?
Ans: Through PowerExchange.

Q 10 How to find all invalid mappings in a folder?
Ans: We can find the invalid mappings from the REP_ALL_MAPPINGS view with the query:
Select mapping_name from rep_all_mappings
Where subject_area = 'Folder_Name'
And parent_mapping_is_valid <> 1

Friday 23 January 2009

Trigger Firing sequence in D2K (Oracle Forms)

Trigger firing sequence:
This is the most important thing to understand in Oracle D2K Forms. When you open a form, the following triggers are executed.

First, the logon triggers are fired:
1.PRE-LOGON
2.ON-LOGON
3.POST-LOGON

After that, the PRE- triggers:
4. PRE-FORM
5. PRE-BLOCK
6. PRE-TEXT-ITEM

After that, the WHEN-NEW- triggers:
7. WHEN-NEW-FORM-INSTANCE
8. WHEN-NEW-BLOCK-INSTANCE
9. WHEN-NEW-ITEM-INSTANCE

After that, the item-level triggers. Focus is now on the first item of the block. If you type some data and press the Tab key, the following triggers fire in sequence:
10.KEY-NEXT-ITEM (This trigger is present on the item level).
11.POST-CHANGE (This trigger is present on the item level).
12.WHEN-VALIDATE-ITEM (This trigger is present on the item level).
13.POST-TEXT-ITEM (This trigger is present on the item level).
14.WHEN-NEW-ITEM-INSTANCE (Block Level Trigger).

Now focus moves to the next item present on the form.
If there are no further items present on the screen, then entering data and pressing the Tab key fires only KEY-NEXT-ITEM. Now suppose we close the form; then the POST triggers are fired.

POST TRIGGERS

15.POST-BLOCK
16.POST-FORM

Now the form will be closed.
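To watch this sequence yourself, a simple (illustrative) trick is to put a MESSAGE call in each of the triggers above; for example, the body of WHEN-NEW-FORM-INSTANCE would be:

-- repeat in each trigger with its own name to see the firing order
MESSAGE('WHEN-NEW-FORM-INSTANCE fired');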

Normalizer transformation (Working with VSAM source)

Normalizer transformation :
The Normalizer transformation is used with COBOL sources, which are often stored in a denormalized format. The OCCURS statement in a COBOL file nests multiple records of information in a single record. We can use the Normalizer transformation to break out repeated data within a record into separate records. For each new record it creates, the Normalizer transformation generates a unique identifier.
Step 1: Create the Copybook for COBOL source
The first step is to get the copybook from the mainframe team and convert it into an Informatica-compliant format. It will look like the sketch below: normally the record layout in the middle is what the mainframe team provides; convert it into the required format by adding lines above it (from identification division down to fd FNAME) and below it (starting from working-storage division). After the changes, save the file as a .cbl file.
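A minimal, illustrative .cbl skeleton (the 01-level record layout in the middle is whatever the mainframe team supplies; the names here are made up):

       identification division.
       program-id. mead.
       environment division.
           select FNAME assign to "FNAME".
       data division.
       file section.
       fd  FNAME.
       01  CUSTOMER-REC.
           05 CUST-ID    PIC 9(9) COMP-3.
           05 CUST-NAME  PIC X(30).
           05 CUST-PHONE OCCURS 3 TIMES PIC X(10).
       working-storage division.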
Points to take care of while editing the .cbl file
You might get the following error:

identification division.
program-id. mead.
environment division.
select Error at line 6 : parse error
Things to be taken care of:

1. The line select FNAME should not start before column position 12
2. The other lines which have been added above and below should not start before column 9
3. All the lines in the structure should end with a dot
Once the COBOL source is imported successfully, you can drag the Normalizer source into the mapping.
Step 2: Set Workflow Properties Properly for VSAM Source
Once you have successfully imported the COBOL copybook, you can create your mapping using the VSAM source. After creating the mapping you can create your workflow.
Please take care of the following properties in a session containing a VSAM source.
In the source's advanced file properties, set the appropriate options.
Imp: Always ask for the COBOL source file to be in binary format; otherwise you will face a lot of problems with COMP-3 fields.
Once you have set these properties you can run your workflow.
COMP-3 FIELDS:
COBOL COMP-3 is a binary field type that puts ("packs") two digits into each byte, using a notation called Binary Coded Decimal, or BCD. This halves the storage requirements compared to a character, or COBOL "display", field. COMP-3 is a common data type, even outside of COBOL.
Common issues faced while working with COMP-3 fields:
A frequent mistake is creating the COBOL source definition with COMP-3 (packed) fields while the actual data in the source file is not packed. Make sure that the definition and the source file use the same format.
Check whether COMP-3 fields are signed or unsigned.

Tuesday 20 January 2009

All About Oracle Synonym

Synonym: A synonym is an alias for a table, view, stored procedure, or other database object.

Main uses of synonyms:
1. Hide complexity: If you have to access an object owned by another database user, then
you have to use the syntax Owner.ObjectName every time you use that object. Instead of using this cumbersome syntax every time, we can create a synonym:
Create Synonym Syn_Name for Owner.ObjectName
Suppose you have to access table Tab_Y owned by user User_Y. Before creating the synonym you have to use
Select * from User_Y.Tab_Y
Now if we create a synonym
Create Synonym Tab_Y for User_Y.Tab_Y
then after creating the synonym you can use
Select * from Tab_Y
Note: The referenced object does not need to exist at the time the synonym is created.

2. Hide owner/location: In some applications the designers do not want to reveal the owner and location of an object to other users. To achieve this they create a synonym.
Types of synonyms: There are primarily two types of synonym:
1. Private: Only the owner of the synonym can use a private synonym. By default a synonym is created private.
Create Synonym Tab_Y for User_Y.Tab_Y
2. Public: Everyone can use a public synonym. Syntax for creating a public synonym:
Create Public Synonym Tab_Y for User_Y.Tab_Y

Order of precedence for synonyms:
There is a lot of confusion, when both an object and a synonym of the same name exist in the database, about which one will be used first. The order of precedence is as follows:
1. Local objects will always be accessed first.
2. If a local object does not exist, the object with a private synonym will be accessed.
3. If a private synonym does not exist or the object does not exist, then the public synonym will be used.

Flip side of using synonyms:
1. There are security issues related to public synonyms.
2. If there are a lot of synonyms in the database then database performance will degrade: when a user queries Tab_A in the example above, the parser has to search across the large set of synonyms and has to put each synonym in the library cache along with its dependencies.

Alternative to using synonyms:
Use a logon trigger to issue ALTER SESSION SET CURRENT_SCHEMA, which sets the default schema for unqualified object names:
create or replace trigger trg_at_logon
after logon on database
begin
  if user in ('X','Y') then
    execute immediate 'alter session set current_schema=DFLT';
  end if;
end;
Now when users X and Y log in, their default schema for unqualified objects will be DFLT.
Misc points about synonyms:
We cannot use the DROP and TRUNCATE commands with a synonym.

Removing synonyms:
We can use the following syntax to drop a private synonym:
Drop synonym syn_name
To drop a public synonym:
Drop Public synonym syn_name
We can also use the FORCE option:
Drop synonym syn_name force
The FORCE syntax makes Oracle drop the synonym even if it has dependencies. It is not a good idea to use FORCE, as it can cause invalidation of Oracle objects.

Query to find synonym use in the database:
Suppose we want to find the objects that reference synonym TEST_SYN:
Select owner, name, type
From dba_dependencies
Where referenced_name = 'TEST_SYN'
And referenced_type = 'SYNONYM'

Common error with synonyms:
One of the most common errors encountered with synonyms is
"ORA-00980: synonym translation is no longer valid"
The main causes for this are:
1. A synonym was created on a non-existing object by mistake.
2. You dropped an object but did not drop the synonyms referencing it.


Friday 16 January 2009

New features of Informatica 8

Enhancements in the Informatica 8.6 version:


  1. Target from transformation: In Informatica 8 we can create a target from a transformation by dragging the transformation into the Target Designer.

  2. Pushdown optimization: Increases performance by pushing transformation logic to the database; the server analyzes the transformations, issues SQL statements to sources and targets, and itself processes only the transformation logic that it cannot push to the database.

  3. New functions in the expression editor: New functions have been introduced in Informatica 8, such as REG_EXTRACT and REG_MATCH.

  4. Repository queries are available in both versioned and non-versioned repositories; previously they were available only for versioned repositories.

  5. UDFs (user-defined functions), similar to macros in Excel.

  6. FTP: We can have partitioned FTP targets and indirect FTP file sources (with a file list).

  7. Propagating port descriptions: In Informatica 8 we can edit a port description and propagate it to other transformations in the mapping.

  8. Environment SQL enhancements: Environment SQL can still be used to execute an SQL statement at the start of each connection to the database. We can use SQL commands that depend upon a transaction being open during the entire read or write process. For example, the following SQL command modifies how the session handles dates: Alter session set NLS_DATE_FORMAT='DD/MM/YYYY';

  9. Concurrently write to multiple files in a session with partitioned targets.

  10. Flat file enhancements:
  • Reduced conversion of data types
  • Delimited file performance has improved
  • Flat files can now have integer and double data types
  • Data can be appended to existing flat files

Thursday 15 January 2009

Inline view

Inline view :

An inline view is the term given to a sub-query in the FROM clause of a query that can be used as a table. An inline view is effectively a named sub-query.

Ex : Select Tab1.col1, Tab1.col2, Inview.col1, Inview.col2
From Tab1, (Select statement) Inview
Where Tab1.col1 = Inview.col1

SELECT D.DNAME, E.ENAME, E.SAL
FROM EMP E,
(SELECT DNAME, DEPTNO FROM DEPT) D
WHERE E.DEPTNO = D.DEPTNO

In the above query, (SELECT DNAME, DEPTNO FROM DEPT) D is the inline view.

Inline views are evaluated at runtime and, in contrast to normal views, they are not stored in the data dictionary.

There are a lot of instances when we want data from a sub-query to be available in the main query. Before Oracle 7.3 the only option was to create a view and use that view in the query.

The disadvantages of using a separate view are:

1. A separate view needs to be created, which is an overhead
2. Extra time is taken in parsing the view

This problem is solved by the inline view: use a select statement in the sub-query and use it as a table.

Advantages of using inline views:

1. Better query performance
2. Better visibility of code

Practical uses of inline views:

1. Joining grouped data with non-grouped data (see the sketch below)
2. Getting data to use in another query
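A minimal sketch of use 1, assuming the standard EMP table: list each employee whose salary is above the average salary of their department.

SELECT E.ENAME, E.SAL, DA.AVG_SAL
FROM EMP E,
(SELECT DEPTNO, AVG(SAL) AVG_SAL FROM EMP GROUP BY DEPTNO) DA
WHERE E.DEPTNO = DA.DEPTNO
AND E.SAL > DA.AVG_SAL

The grouped data (the per-department average) is computed in the inline view and then joined back to the non-grouped rows, which a plain single query cannot do without a sub-query.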

I will add to this post tomorrow with more practical examples of inline views.

Monday 12 January 2009

Star Vs Snowflake Schema

In continuation of my last post, we will look at dimensional modeling in more detail.

In dimensional modeling there are mainly two types of schemas:

1. Star schema
2. Snowflake schema

Star Schema :
The star schema is the simplest data warehouse schema. It is called a star schema because the ER diagram of this schema looks like a star, with points radiating from a center. The center of the star consists of a large fact table, and the points of the star are the dimension tables.

A star schema is identified by one or more large fact tables at the center, containing the primary information in the data warehouse, and a number of small dimension tables, each of which holds information about a particular attribute of the fact table, as sketched below.
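An illustrative star schema in SQL (the table and column names are made up for the example):

CREATE TABLE dim_product (
product_key NUMBER PRIMARY KEY,
product_name VARCHAR2(50),
category VARCHAR2(30)
);

CREATE TABLE dim_geography (
geo_key NUMBER PRIMARY KEY,
country VARCHAR2(30),
region VARCHAR2(30)
);

-- the fact table at the center references each dimension
CREATE TABLE fact_revenue (
product_key NUMBER REFERENCES dim_product,
geo_key NUMBER REFERENCES dim_geography,
revenue NUMBER(12,2)
);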

Advantages of Star Schema :

1. Provides a direct mapping between the business entities and the schema design.
2. Provides highly optimized performance for star queries.
3. It is widely supported by a lot of business intelligence tools.

Disadvantages of Star Schema:
There are some requirements which cannot be met by a star schema; for example, the relationship between customer and bank account cannot be represented purely as a star schema, as the relationship between them is many-to-many.


Snowflake Schema:
The snowflake schema is a bit more complex than the star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake.

In a snowflake schema, tables are normalized to remove redundancy: dimension tables are broken into multiple dimension tables, for example the product table is broken into product and sub-product tables.
The snowflake schema is designed for flexible querying across more complex dimensions and relationships. It is suitable for many-to-many and one-to-many relationships between dimension levels.

Advantages of Snowflake Schema:
1. It provides greater flexibility in the interrelationships between dimension levels and components.
2. No redundancy, so it is easier to maintain.

Disadvantages of Snowflake Schema :

1. Queries are more complex and hence more difficult to understand
2. More tables and more joins, so longer query execution times.


Saturday 10 January 2009

Dimensional Data Modeling

Dimensional Data Modeling :
Dimensional modeling is a technique used in data warehousing systems. It is different from the ER modeling technique used in OLTP systems. In dimensional modeling, a model of tables is put together with the aim of optimized query performance in decision support systems running on relational databases.

Before going further into the details of dimensional data modeling, we should be aware of the key concepts in dimensional data modeling.

Dimension : A dimension can be considered a category of information, for example the geographic dimension. It provides a way to slice and dice the data in a data warehouse.

Attribute : An attribute can be considered a characteristic of a dimension. For example, region is an attribute of the geographic dimension. A dimension attribute is a column in the dimension table.

Hierarchy : A hierarchy represents the relationships between the attributes of a dimension and defines the different levels within it. For the geographic dimension it looks like:

Continent -> Country -> Region -> State -> City -> Street

Fact Table : The fact table represents the measure of interest; for an organization, product revenue is a measure of interest. The level of granularity is an important factor in designing fact tables: for example, revenue could be held by continent or by country. In this example the fact table will have three columns: Product, Geographical Region, Revenue.

Lookup Table: The lookup table represents a dimension and holds the attributes for that dimension. The lookup table for product will contain all available products.

Dimensional Modelling Vs ER Modelling :
In dimensional modelling the emphasis is on optimising decision-support query performance. ER models, on the other hand, are focused on:
  1. Removing redundancy in the data model
  2. Optimising OLTP performance
  3. Retrieval of single records

We will continue with dimensional data modelling in next post.


Wednesday 7 January 2009

High Water Mark (Oracle)

High-water mark

The high-water mark (HWM) indicates the point up to which a table or index has ever contained data. Let me explain: suppose we create an empty table; the high-water mark would be at the start of the table segment.


Now if we insert some data into the table segment, the HWM moves, and it will sit at the position up to which the data extends in the segment.


If we insert more data into the table segment, the HWM moves further, to the new position up to which the data extends.

Now if we delete the data from the table, observe the new position of the HWM.

As we can see, on deleting the data the HWM does not move. The drawback of this is that Oracle always reads the blocks up to the high-water mark in a full table scan. You must have noticed that doing a COUNT(*) on an emptied table can take a long time to report zero rows; the reason is the HWM still sitting at its high position.

Now you must be wondering how to move the high-water mark back down.
The classic method to reset the HWM is to truncate the table (ALTER TABLE ... MOVE and, from 10g onwards, ALTER TABLE ... SHRINK SPACE can also reset it).

Let us see how truncate resets the HWM.
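A sketch of the whole effect in SQL*Plus (illustrative; actual timings depend on your system):

-- build a big table, then empty it with DELETE
create table t as select * from all_objects;
delete from t;
commit;

-- still slow: the full scan reads every block below the HWM
select count(*) from t;

-- TRUNCATE moves the HWM back to the start of the segment
truncate table t;

-- now the count returns almost instantly
select count(*) from t;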


The HWM is reset now, after truncating the data. A table with lots of delete and insert activity going on probably has a high HWM. If the HWM is high, it is always advisable to rebuild the table segment for performance improvement.

Monday 5 January 2009

All About Oracle Sequences

Oracle sequence
An Oracle sequence is a database object that is used to generate unique numbers. It is mainly used for primary key values.
Syntax
The syntax for creating a sequence is:
CREATE SEQUENCE sequence_name
INCREMENT BY number
START WITH number
MAXVALUE number
MINVALUE number
CYCLE / NOCYCLE
CACHE number / NOCACHE
ORDER / NOORDER
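For example, a typical sequence and its use for a primary key (assuming an emp table with EMPNO and ENAME columns):

CREATE SEQUENCE emp_seq
START WITH 1
INCREMENT BY 1
CACHE 20
NOCYCLE;

INSERT INTO emp (empno, ename)
VALUES (emp_seq.NEXTVAL, 'SMITH');

-- the value just generated in this session
SELECT emp_seq.CURRVAL FROM dual;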

Important options that need to be considered:

CYCLE/NOCYCLE: If we specify the CYCLE option while creating the sequence, it will continue to generate values even after reaching its maximum value; it will start again from the minimum value. NOCYCLE means it will not generate values after reaching the maximum value. The default is NOCYCLE.

CACHE/NOCACHE: CACHE specifies how many values Oracle pre-assigns and keeps in memory for fast access. NOCACHE specifies that no values are pre-assigned. If we don't use either option, the default cache size is 20.

The flip side of using a sequence with CACHE is that if a system failure occurs, all cached values that were not used will be lost. For example, if you were using a sequence with a cache value of 100 and Oracle had used only 30 values, then you would lose 70 sequence values.

Things to remember while using sequences

1. In a session, sequence.NEXTVAL must be referenced before using sequence.CURRVAL. If you try to use the current value before using NEXTVAL in the session, you will get the following error: ORA-08002: sequence SEQ_TEST.CURRVAL is not yet defined in this session.

2. If you insert 20 records using a sequence and then issue a rollback, the sequence will not be rolled back; next time Oracle will use the 21st value. Oracle has wisely chosen this behavior so that multiple users can use a sequence without any concern about duplicate sequence values.

3. Creating sequences with NOCACHE will have a performance impact.

4. We cannot use a sequence when the UNION operator combines SELECT statements.

Enhancement in 11g for sequences

In 11g we can use direct variable assignment for sequences, as shown below:
v_seq_11g := TEST_SEQ_11G.NEXTVAL;
To do the same in 10g and prior we have to use:
SELECT TEST_SEQ_10G.NEXTVAL INTO v_seq_10g FROM DUAL;
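Put together as a runnable PL/SQL block (assuming a sequence named TEST_SEQ already exists; the direct assignment line needs 11g or later):

DECLARE
v_seq NUMBER;
BEGIN
-- 11g style: direct assignment
v_seq := test_seq.NEXTVAL;
-- 10g and earlier: SELECT ... INTO from DUAL
SELECT test_seq.NEXTVAL INTO v_seq FROM dual;
DBMS_OUTPUT.PUT_LINE('Last value used: ' || v_seq);
END;
/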

Saturday 3 January 2009

How to transpose data in Informatica

Suppose we have input data coming in as:

Firstname1
Ph1
Address1

Firstname2
Ph2
Address2

Firstname3
Ph3
Address3

You want the output as below, i.e. you want to convert the 9 rows into 3 rows, with related fields grouped together:

Firstname1 Ph1 Address1
Firstname2 Ph2 Address2
Firstname3 Ph3 Address3

You can create a mapping by following these steps:
  1. Create a Source Qualifier with one input field, FIELD1

  2. Add an Expression transformation after the SQ

  3. Link FIELD1 from the SQ to the Expression transformation

  4. Add a new output port grp = SUBSTR(FIELD1, LENGTH(FIELD1), 1), which picks up the trailing record number (1, 2, 3) so that rows belonging together share a group value

  5. Add an Aggregator transformation after the Expression and drag in the FIELD1 and grp ports

  6. Set grp as the group-by field and add three new output ports:
  OP_FLD1 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='F')
  OP_FLD2 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='P')
  OP_FLD3 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='A')

  7. Add OP_FLD1, OP_FLD2, OP_FLD3 to the target

Friday 2 January 2009

Unix Handy Commands

These are the commands which I use on a daily basis in Unix. I have found them very useful.


  1. To see a particular line: for example, if you just want to see the 180th line in a file: sed -n '180p' testfile.txt
  2. To find a particular column in a file: cat testfile.txt | awk -F"," '{print $2}'
  3. To rename a file with the current date: mv test test_`date +%Y-%m-%d`
  4. To pull out all the lines which have an 8 at the 17th position: grep '^.\{16\}8' testfile.txt >testfile_new.txt
  5. To remove the nth line without opening the file: sed 'nd' file1>file2 ; to remove multiple lines: sed -e '1d' -e '5d' file1>file2
  6. To find the top 20 files using the most space: ls -ltr | awk -F" " '{print $5, $9}' | sort -n | tail -20
  7. To find records in the first file that are not in the second (both files must be sorted): comm -23 testfile1.txt testfile2.txt
  8. If you are looking for something that is contained in a file but you don't know which directory it is in: find . -name "*" | xargs grep -i something ; this will find all of the files in the directory and below and grep for the string in those files!
  9. To delete all the files starting with the name testfile: find . -type f -name "testfile*" -exec rm -f {} \;
  10. To remove blank spaces from a file: sed -e "s/ *//g" testfile.txt >testfile.txt_wo_space

Thursday 1 January 2009

Difference between TRUNCATE and DELETE

This is one of the most frequently asked questions in Oracle interviews.
I have tried to put down all the differences I know; you are welcome to add your thoughts and comments.

Delete

  1. It is a DML statement

  2. Can be rolled back

  3. Can delete selective records

  4. It fires database triggers

  5. It does not require disabling of referential constraints

  6. Deletes perform normal DML: they take locks on rows, they generate redo (lots of it), and they require segments in the UNDO tablespace. Deletes clear records out of blocks carefully. If a mistake is made, a rollback can be issued to restore the records prior to a commit. A delete does not relinquish segment space, so a table in which all records have been deleted retains all of its original blocks.


Truncate

  1. It is a DDL statement
  2. Can't be rolled back
  3. Can't delete selective records; it will delete all the records in the table

  4. Doesn't fire database triggers

  5. It requires disabling of referential constraints

  6. Truncate moves the High Water Mark of the table back to zero. No row-level locks are taken, and no redo or rollback is generated.
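A minimal sketch of the rollback difference, assuming a scratch copy of the EMP table:

-- work on a throwaway copy
CREATE TABLE emp_copy AS SELECT * FROM emp;

DELETE FROM emp_copy;
ROLLBACK;                       -- rows come back: DELETE is DML
SELECT COUNT(*) FROM emp_copy;  -- original row count

TRUNCATE TABLE emp_copy;        -- DDL: implicit commit, HWM reset
ROLLBACK;                       -- has no effect
SELECT COUNT(*) FROM emp_copy;  -- 0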