Improve Mongo Import or Export Performance


Amar

Sep 12, 2012, 3:39:39 PM
to mongod...@googlegroups.com
Hi All,

I am inserting almost 1 crore (10 million) records into MongoDB through the Java Driver. I build DBObjects and store them in an ArrayList; when the list reaches one million records I call the insert method, clear the list, and repeat until all 1 crore records are inserted. The problem is that inserting one million records takes almost 2.5 minutes. How can I improve this number using the Mongo Java Driver?

My code snippet:

List<DBObject> basicDBList = new ArrayList<DBObject>();
int millionsInserted = 0;
for (int i = 0; i < NO_OF_RECORDS; i++) {
    BasicDBObject basicDBObject = new BasicDBObject();
    // ... populate the document's fields ...
    basicDBList.add(basicDBObject);

    if (basicDBList.size() == 1000000) {
        this.mongoDocumentOperations.addDocumentsToCollection(
                argDatabaseName, argEntity.getName(), basicDBList);
        basicDBList = new ArrayList<DBObject>();
        millionsInserted++;
        System.out.println(millionsInserted
                + " million inserted; list size after clearing: "
                + basicDBList.size() + " " + System.currentTimeMillis());
    }
}

Is this the right way to do the insertion, or is there a better way?

Note: one more important point is that while this runs, my CPU usage is almost 95%.
Configuration:
i7 processor,
8GB RAM,
Windows 7, Dell Studio XPS.

Please help me understand how I can improve this performance.

Thanks & Regards,
Amar

Rob Moore

Sep 12, 2012, 9:44:45 PM
to mongod...@googlegroups.com

There are a lot of factors at play here but...

Taking 2.5 minutes to insert 1 million documents of any real size is not that surprising.  You may get better throughput by adjusting the batch size you send so that each batch approaches the 16MB message limit the driver is chunking the data into anyway.
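
For illustration, a minimal sketch of that kind of size-targeted batching with the 10gen driver; the placeholder document and the ~1KB-per-document estimate are assumptions, so size DOCS_PER_BATCH to your real documents:

import java.util.ArrayList;
import java.util.List;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;

public class SizeBatchedInsert {
    // Target roughly one 16MB wire message per insert call,
    // assuming documents of about 1KB each: ~16,000 docs per batch.
    private static final int DOCS_PER_BATCH = (16 * 1024 * 1024) / 1024;

    public static void insertAll(DBCollection coll, int totalDocs) {
        List<DBObject> batch = new ArrayList<DBObject>(DOCS_PER_BATCH);
        for (int i = 0; i < totalDocs; i++) {
            batch.add(new BasicDBObject("value", i)); // placeholder document
            if (batch.size() == DOCS_PER_BATCH) {
                coll.insert(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            coll.insert(batch); // flush the remainder
        }
    }
}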

If the client doing the insert and the server are on different machines, the real problem is that you are waiting for the server on every insert.  That is why I started the Asynchronous Java Driver in the first place.  We are getting to the point of releasing 1.0.0 very soon (next few days), and part of that release will be some benchmark numbers using the YCSB workloads.  I can tell you that for that benchmark we were able to get the load time for 1,000,000 1KB documents down to 87 seconds using the synchronous, wait-for-an-ACK-from-the-server-after-every-single-document insert method of the driver.  It used a lot of threads to fill the pipe to the server.  If the asynchronous interface and a slightly larger batch had been used, I'm sure the throughput would have been even higher with far fewer threads.

What performance do you need to reach with the batch inserts?  How big are the documents?

For the asynchronous driver, to maximize insert performance I'd suggest:
* Do an asynchronous insert (insertAsync) and either don't worry about the reply, or collect the result after preparing/sending the next batch or in the background.
     ** Note that you can still get acknowledgements from the server using this interface.
* Cut the batch size way down, to 10-50 documents, to get both the client and server working in parallel. For example:

Mongo m = MongoFactory.create("mongodb://<server>:<port>/");
MongoDatabase database = m.getDatabase("db");
MongoCollection collection = database.getCollection("collection");

DocumentBuilder builder = BuilderFactory.start();
List<Document> docs = new ArrayList<Document>();
Future<Integer> last = null;

for (int i = 0; i < NO_OF_RECORDS; i++) {
    builder.reset();
    builder.add(...); // ... populate the document's fields ...

    docs.add(builder.build());

    if (docs.size() == 10) {
        // Send this batch and start a new one.
        Future<Integer> current =
                collection.insertAsync(docs.toArray(new Document[docs.size()]));
        docs.clear();

        // Handle the previous batch's result.
        if (last != null) {
            Integer value = last.get();
            // Something?
        }
        last = current;
    }
}

As always, your mileage may vary.

Rob.

amar shiva

Sep 13, 2012, 2:55:49 AM
to mongod...@googlegroups.com
Hi Rob,

Thank you very much for your detailed reply. I am just a learner, so I am comparing mongoimport with the Mongo Java Driver. When I import through mongoimport it takes 85 seconds for one million records, but through the Mongo Java Driver it takes 125 seconds. That's why I am asking how I can improve performance through the Mongo Java Driver.
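
For reference, the import I am timing is just a plain mongoimport run, something like this (the database, collection, and file names here are only examples):

mongoimport --db test --collection people --file people.json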

Here is my document (an example JSON string):

"{ "_id" : { "$oid" : "504c785136e19bc619febe56" }, "title" : "Mr.", "firstName" : "Alex", "middleName" : "Clavel", "lastName" : "Verne", "suffix" : "Prof.", "phoneNumber" : "+919949257604", "extension" : "PeterGordon", "emailAddress" : "lisa....@gmail.com", "url" : "www.RitaVerne.com", "imId" : "Lisa....@sample.com", "streetAddress" : "P.O. Box 532, 3225 Lacus. Avenue", "city" : "Amsterdam", "state" : "Arkansas", "postalCode" : "56155", "country" : "USA", "passportNumber" : "HenrikGordon", "issuedBy" : "MikeGates", "placeIssued" : "NinaAllison", "dateIssued" : "2008-8-9", "expiryDate" : "1957-8-20", "emigrationCheckRequired" : "false", "driversLicenseNumber" : "AlexScott", "stateIssued" : "MargeSchneider", "name" : "MargeSimpson", "code" : "MikeAllison" }"

So the size of this string is about 1KB, and when inserting I convert this JSON to DBObject(s) and then do a bulk insert. As you said, the driver itself chunks the data and inserts it.

Rob, correct me if I'm wrong:
  • As per my understanding, if I use asynchronous calls I can get better performance than with synchronous calls?
  • Instead of inserting one million records at a time, we need to keep the batch size small (as you said in your mail, 10-50)?

You said you are getting a load time of 87 seconds for one million 1KB records. Are you using synchronous calls or asynchronous calls?

Please correct me, Rob, and guide me onto the right path.


Thanks & Regards

Amar



Stephen Lee

Sep 13, 2012, 2:38:58 PM
to mongod...@googlegroups.com
Hi Amar,

I wrote a Java program to simulate a simpler version of your code; it doesn't batch your inserts and relies solely on MongoDB's fire-and-forget default insert behavior.

import com.mongodb.Mongo;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.BasicDBObject;

import java.util.Date;

public class Test {
    public static void main( String[] args ) throws Exception {
        Mongo m = new Mongo( "localhost" , 30001 );
        DB db = m.getDB( "test" );
        DBCollection coll = db.getCollection( "testperf" );
        coll.drop();

        Date start = new Date();
        System.out.println( "Started " + start );
        for( int i = 1; i <= 1000000; i++ ) {
            coll.insert( new BasicDBObject( "value", ( int )( Math.floor( Math.random() * 1000.0 ) ) )
                .append( "title", "Mr." )
                .append( "firstName", "Alex" )
                .append( "middleName", "Clavel" )
                .append( "lastName", "Verne" )
                .append( "suffix", "Prof." )
                .append( "phoneNumber", "+919949257604" )
                .append( "extension", "PeterGordon" )
                .append( "emailAddress", "lisa....@gmail.com" )
                .append( "url", "www.RitaVerne.com" )
                .append( "imId", "Lisa....@sample.com" )
                .append( "streetAddress", "P.O. Box 532, 3225 Lacus. Avenue" )
                .append( "city", "Amsterdam" )
                .append( "state", "Arkansas" )
                .append( "postalCode", "56155" )
                .append( "country", "USA" )
                .append( "passportNumber", "HenrikGordon" )
                .append( "issuedBy", "MikeGates" )
                .append( "placeIssued", "NinaAllison" )
                .append( "dateIssued", "2008-8-9" )
                .append( "expiryDate", "1957-8-20" )
                .append( "emigrationCheckRequired", "false" )
                .append( "driversLicenseNumber", "AlexScott" )
                .append( "stateIssued", "MargeSchneider" )
                .append( "name", "MargeSimpson" )
                .append( "code", "MikeAllison" )
            );
            if( i > 1 && i % 10000 == 0 ) {
                System.out.println( "Wrote out " + i + " documents." );
            }
        }
        m.close();
        Date end = new Date();
        System.out.println( "Ended " + end );
        System.out.println( ( end.getTime() - start.getTime() ) + "ms" );
    }
}

On my Core i7 2.3GHz w/ 8GB RAM, running MongoDB v2.2 and MongoDB Java Driver v2.9.0, it took 43s.  Can you run it and let me know what performance you see?

-Stephen

Rob Moore

Sep 13, 2012, 9:51:02 PM
to mongod...@googlegroups.com

Amar,

The key here is to not wait for the server.  As Stephen posted, you can use the "fire and forget" mode of the 10gen driver and get decent performance.  The problem is that in that mode you have no way of knowing whether an insert worked.
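
In the 10gen driver that trade-off is controlled by the WriteConcern you pass to insert. A minimal sketch (host, port, and collection names are just examples):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.WriteConcern;

public class WriteConcernDemo {
    public static void main(String[] args) throws Exception {
        Mongo m = new Mongo("localhost", 27017);
        DBCollection coll = m.getDB("test").getCollection("testperf");

        // Fire-and-forget: fastest, but server-side failures go unnoticed.
        coll.insert(new BasicDBObject("x", 1), WriteConcern.NONE);

        // Safe: waits for the server's acknowledgement, so failures surface
        // as exceptions, at the cost of a round trip per insert.
        coll.insert(new BasicDBObject("x", 2), WriteConcern.SAFE);

        m.close();
    }
}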

I have created a completely different driver that gives you better performance while still letting you know what happened on the server, but it requires you to work a little harder on the programming side to satisfy both goals.  For that you need to use my driver's asynchronous interface; the 10gen-supported driver does not have an asynchronous interface.

Below is a program similar to Stephen's using the Asynchronous Java Driver.  It has two major differences:
    1) It uses the "Durability.ACK" mode, which is comparable to WriteConcern.SAFE from the 10gen driver, instead of WriteConcern.NONE (fire-and-forget) mode.
    2) It collects all of the results of the inserts on the server via a list of Futures and checks for errors by simply "get()"ing each Future's result.

On my Core i5, 8GB, I get 36.8 seconds, about 15% faster than Stephen's 43-second run even though every insert is acknowledged.  <Insert standard micro-benchmark disclaimer>

Wrote out 100000 documents: 4.240610847
Wrote out 200000 documents: 7.453103065
Wrote out 300000 documents: 12.617284371
Wrote out 400000 documents: 15.802694882
Wrote out 500000 documents: 18.965018802
Wrote out 600000 documents: 22.139140997
Wrote out 700000 documents: 25.359382208
Wrote out 800000 documents: 29.545767544
Wrote out 900000 documents: 32.717212151
Wrote out 1000000 documents: 36.706096595
Finished: 36.851899257s

Getting back to your problem:
  1. Cut the batch size down.  Batch inserts are great if everything is done in one command, but there is a limit to the speed-up you can achieve.  More important is to fill the pipeline/socket going to the server and keep the server busy.
  2. To fill the pipeline: use fire-and-forget or give the Asynchronous Java Driver a try.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import com.allanbank.mongodb.Durability;
import com.allanbank.mongodb.Mongo;
import com.allanbank.mongodb.MongoCollection;
import com.allanbank.mongodb.MongoDatabase;
import com.allanbank.mongodb.MongoFactory;
import com.allanbank.mongodb.bson.builder.BuilderFactory;
import com.allanbank.mongodb.bson.builder.DocumentBuilder;
import com.allanbank.mongodb.bson.element.ObjectId;

public class Test {
    public static void main(String[] args) throws Exception {
        Mongo m = MongoFactory.create("mongodb://localhost:27017");

        // We want to know what happens.
        m.getConfig().setDefaultDurability(Durability.ACK);

        MongoDatabase db = m.getDatabase("test");
        MongoCollection coll = db.getCollection("testperf");
        coll.drop();

        DocumentBuilder b = BuilderFactory.start();
        long start = System.nanoTime();
        List<Future<Integer>> results = new ArrayList<Future<Integer>>(1000000);

        for (int i = 1; i <= 1000000; i++) {
            Future<Integer> current = coll.insertAsync(b.reset()
                    .add("_id", new ObjectId()).add("title", "Mr.")
                    .add("firstName", "Alex").add("middleName", "Clavel")
                    .add("lastName", "Verne").add("suffix", "Prof.")
                    .add("phoneNumber", "+919949257604")
                    .add("extension", "PeterGordon")
                    .add("emailAddress", "lisa....@gmail.com")
                    .add("url", "www.RitaVerne.com")
                    .add("imId", "Lisa....@sample.com")
                    .add("streetAddress", "P.O. Box 532, 3225 Lacus. Avenue")
                    .add("city", "Amsterdam").add("state", "Arkansas")
                    .add("postalCode", "56155").add("country", "USA")
                    .add("passportNumber", "HenrikGordon")
                    .add("issuedBy", "MikeGates")
                    .add("placeIssued", "NinaAllison")
                    .add("dateIssued", "2008-8-9")
                    .add("expiryDate", "1957-8-20")
                    .add("emigrationCheckRequired", "false")
                    .add("driversLicenseNumber", "AlexScott")
                    .add("stateIssued", "MargeSchneider")
                    .add("name", "MargeSimpson").add("code", "MikeAllison"));

            results.add(current);
            if (i % 100000 == 0) {
                long now = System.nanoTime();
                long delta = now - start;
                double deltaSeconds = ((double) delta) / TimeUnit.SECONDS.toNanos(1);
                System.out.println("Wrote out " + i + " documents: " + deltaSeconds);
            }
        }

        // Check for errors.
        for (Future<Integer> result : results) {
            result.get();
        }

        long end = System.nanoTime();
        System.out.print("Finished: ");

        long delta = end - start;
        double deltaSeconds = ((double) delta) / TimeUnit.SECONDS.toNanos(1);

        System.out.println(deltaSeconds + "s");

        m.close();
    }
}

 

amar shiva

Sep 14, 2012, 1:55:34 AM
to mongod...@googlegroups.com
Hi Stephen, Robert

Thanks for your valuable replies. I will try both approaches and report back within a day.

Once again, thanks to both of you.

Thanks & Regards,
Amar


amar shiva

Sep 14, 2012, 6:59:17 AM
to mongod...@googlegroups.com
Hi Stephen and Robert,

I copied your code into different classes and ran it to insert one million records.

Stephen's code took almost 24150ms, about 24 seconds. I ran the program 3 to 4 times and every time got a result between 24 and 29 seconds.

Robert's code (Asynchronous Java Driver) took almost 33.488101091s. I ran it 3 to 4 times: the first time it took 63 seconds, and on later runs the time dropped by about half, to 32 seconds the second time and 32-36 seconds after that.

Robert, why is it taking so long the very first time?

I am using 8GB RAM, core i7, 2.20GHz processor.

Initially I used 8GB RAM, core i7, 1.60GHz processor, which is one reason it took 127 seconds for one million.

  • What is the best way to solve my problem? I mean, which one should I use (the Asynchronous Java Driver or the normal Mongo Java Driver)?
Can you please tell me the pros and cons of these two options?

Thanks & Regards,
Amar

Rob Moore

Sep 14, 2012, 5:25:06 PM
to mongod...@googlegroups.com


On Friday, September 14, 2012 6:59:29 AM UTC-4, Amar wrote:
Hi Stephen and Robert,

I copied your code into different classes and ran it to insert one million records.

Stephen's code took almost 24150ms, about 24 seconds. I ran the program 3 to 4 times and every time got a result between 24 and 29 seconds.

Robert's code (Asynchronous Java Driver) took almost 33.488101091s. I ran it 3 to 4 times: the first time it took 63 seconds, and on later runs the time dropped to 32-36 seconds.

Robert, why is it taking so long the very first time?


Do you have the output from the first and subsequent runs?  I suspect the first run was slower because MongoDB had to allocate disk space and could not keep up with the insert rate.  On runs 2-N the space was already allocated and you were running in memory. If this is the case you will see non-uniform times between the periodic outputs, and then potentially another large pause between the "Wrote out 1000000 documents:" line and the Finished time.

Remember, the Asynchronous Java Driver gets a positive acknowledgement from the server for each insert.  Stephen's does not, so it never waits for the inserts to actually finish on the server.

 
I am using 8GB RAM, core i7, 2.20GHz processor.

Initially I used 8GB RAM, core i7, 1.60GHz processor, which is one reason it took 127 seconds for one million.

  • What is the best way to solve my problem? I mean, which one should I use (the Asynchronous Java Driver or the normal Mongo Java Driver)?

That depends on what you are trying to accomplish and which factors are most important for your use case.  Can you fill us in a little more on what you are trying to do?  Is it a one-time load of data? Batch loads every day or week?  Once the data is loaded, then what?

Rob.
 

amar shiva

Sep 15, 2012, 1:33:53 PM
to mongod...@googlegroups.com
Hi Robert,

I am just learning MongoDB now. My actual project will start in a month; meanwhile I want to get a grip on MongoDB.

Rob, I am sending the logs (first run). One important point: I am using 8GB RAM, core i7, 1.60GHz processor.

Wrote out 100000 documents: 13.442118162
Wrote out 200000 documents: 25.509421253
Wrote out 300000 documents: 37.592505983
Wrote out 400000 documents: 49.746991423
Wrote out 500000 documents: 66.362143696
Wrote out 600000 documents: 78.577351785
Wrote out 700000 documents: 90.876182228
Wrote out 800000 documents: 103.647002329
Wrote out 900000 documents: 115.973114307
Wrote out 1000000 documents: 132.756066876
Finished: 132.961310956s

Second Time:
Wrote out 100000 documents: 13.557845911
Wrote out 200000 documents: 26.03419751
Wrote out 300000 documents: 38.524016655
Wrote out 400000 documents: 54.059504975
Wrote out 500000 documents: 66.717266201
Wrote out 600000 documents: 79.394066889
Wrote out 700000 documents: 91.864387859
Wrote out 800000 documents: 108.829111252
Wrote out 900000 documents: 121.333230043
Wrote out 1000000 documents: 133.940320522
Finished: 134.166261845s

When I use async calls I get the same result. Yesterday I ran this code on a different configuration (8GB RAM, core i7, 2.20GHz processor); at present I don't have that PC.
  • Rob, I am running MongoDB and the Java code on the same machine. I have one doubt here: if I run MongoDB and Java on different machines (MongoDB on one server and the Java code on another), will there be any change in insertion timings?
  • Another question: how can I access MongoDB using a REST server?
Can you please help me on these two things? I am stuck here.

Thanks Robert.

Thanks & Regards,
Amar

Rob Moore

Sep 16, 2012, 2:14:39 PM
to mongod...@googlegroups.com
Amar,

Sorry for the slow reply.

Looking at the logs from the runs, I can see a single disk allocation reflected in the timing (e.g., between 400K and 500K of the first run there is a 16-second delta, while the others are 12-13 seconds).

I just pushed out the 1.0.0 version of the driver.  A lot of effort went into performance tuning (and it was the version I used when I ran the test app).  Mind seeing if that improves things?  The speed of the processor will obviously have some effect, but the numbers you are getting are a little slower than I would have expected.

As for co-location of the MongoDB server and client: generally, for a production environment, I'd suggest running MongoDB on a dedicated machine.  The server will consume all of the resources on the box, and unless you are careful you can quickly overload the machine and kill performance.  For testing it's fine and should not cause a significant performance difference either way.

One of the issues the asynchronous driver is trying to resolve is the latency of requests to the server.  The MongoDB server is fast once it gets the request, but if the server is not co-located (not on the same LAN) with the client, then round-trip times quickly start to impact the performance of the client.  For some applications even local LAN latencies can be a problem (but only in extreme cases demanding the absolute highest possible performance).
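
To put a number on it: at a hypothetical round-trip latency of just 0.5ms, one million acknowledged inserts sent one at a time spend 1,000,000 x 0.0005s = 500 seconds doing nothing but waiting on the network, no matter how fast the server is.  Pipelining requests asynchronously is how you hide that wait.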

As for the RESTful interface: I don't know of any drivers that support it as a native interface.  There are a number of REST servers listed on this page: http://www.mongodb.org/display/DOCS/Http+Interface
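
Also, mongod itself has a simple read-only HTTP/REST interface if you start it with the --rest option.  A minimal sketch of reading from it in Java (the database/collection names are just examples, and it assumes the default ports):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RestRead {
    public static void main(String[] args) throws Exception {
        // Assumes mongod was started with --rest; its HTTP port is the
        // mongod port + 1000 (28017 for the default 27017).
        URL url = new URL("http://localhost:28017/test/testperf/");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // JSON listing of the documents
        }
        in.close();
    }
}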

I hope this helps.

Rob.

amar shiva

Sep 16, 2012, 2:45:07 PM
to mongod...@googlegroups.com
Hi Robert,

Thanks for your reply. Two days back you sent a sample program using the Async Java Driver. When I ran that program, the time taken to insert one million records was 125-130 seconds.
In the program you sent, you insert one document at a time with hard-coded values, so every insert writes the same data. My scenario is that I am generating the data randomly: I have different entities, each entity has different fields, and I generate the value for each field randomly (I attached the data generator file). When I used your insertion code with this generated data, it took 353 seconds to insert one million records. So my assumption is that the slow operation comes from Math.random(), which I use in my code. Is that right, Robert?

Is there any method in the asynchronous driver which inserts the data in a batch?

Thanks & Regards,
Amar
SampleDataGenerator.java

Robert Moore

Sep 16, 2012, 3:01:20 PM
to mongod...@googlegroups.com
Amar,

Random is usually not a bottleneck in applications.  I have seen applications exhaust the machine's entropy pool and then hang waiting for more, but that requires the use of SecureRandom, either directly or indirectly (UUID being the main indirect user I run into).

Looking at the data generator, I don't see where you clear the MultiMap keyValuePairMap.  Since it is static, I'm pretty sure it is collecting all 1 million records.  Can you verify that the number of documents in each batch is 100 and not 100, 200, 300, etc.?  If it is 100, 200, 300, as I suspect, then clear the map before the for loop in the createWithTestData() method and re-run the test, e.g.:

...
keyValuePairMap.clear();
for (int i = 0; i < 100; i++) {
...

There is a "batch" interface.  The insert methods are actually varargs: insertAsync(DocumentAssignable... docs).  Just pass more documents to the method call: insertAsync(doc1, doc2, doc3).

That said, if you are using the async interface I highly doubt you will see a performance improvement from batching inserts.  Batching amortizes the cost of a request, and for the async interface that cost is already very close to zero.

Rob.

amar shiva

Sep 16, 2012, 3:33:49 PM
to mongod...@googlegroups.com
Hi Robert,

I didn't understand the line "Can you verify the number of documents for each batch is 100 and not 100, 200, 300, etc?" Do you mean the number of requests in each batch? Here is the code where I access the generated data.

protected DocumentBuilder getDocumentBuilder(String argDatabaseName,
            Entity argEntity) {
        Collection<Field> fields = argEntity.getFields();
        Random r = new Random(100);
        DocumentBuilder documentBuilder = BuilderFactory.start();
        Iterator<Field> fieldsIterator = fields.iterator();
        while (fieldsIterator.hasNext()) {
            Field field = fieldsIterator.next();
            List<String> valueList = null;
            if (this.acceptedDatatypes.contains(field.getName())) {
                valueList = (List<String>) this.multiValueMap.get(field
                        .getName());
            } else if (this.dataTypesList.contains(field.getDataType()
                    .getName())) {
                valueList = (List<String>) this.multiValueMap.get(field
                        .getDataType().getName());
            }
            if (valueList != null && valueList.size() > 0) {
                String value = valueList.get(r.nextInt(valueList.size()));
                documentBuilder.add(field.getName(), value);
            }
        }
        return documentBuilder;
    }

I just get the DocumentBuilder, insert it, and then call this method again, like:

for (int i = 0; i < 1000000; i++)
    coll.insertAsync(getDocumentBuilder());

Is this the right way, Robert?

About the data generator code: even though the map is a static variable, if I run the test case 100 times it will be initialized 100 times, i.e. a fresh copy of the map each time. Is there any problem with that?

Thanks & Regards,
Amar

Rob Moore

Sep 16, 2012, 3:46:01 PM
to mongod...@googlegroups.com

It's a little confusing because I can't see all of the pertinent parts of the program.

The MultiMap is there to hold the set of values to pick from in getDocumentBuilder(...), right?

What are the types of acceptedDatatypes and dataTypesList?  Are they big? Can they be changed to a Set?

Where are argDatabaseName and argEntity coming from?  The method call and the signature don't match...

Have you tried putting the application into a profiler to see where all of the time is being spent?

Rob.

amar shiva

Sep 16, 2012, 4:00:35 PM
to mongod...@googlegroups.com
Hi Robert,

I am sending the entire test case code. Can you please look at it and point me in the right direction? These are the three files I am using for test execution; it also has some other dependencies, but I am unable to send those files.

Thanks & Regards,
Amar
MongoCollectionServiceImplTest.java
RandomDateGenerator.java
SampleDataGenerator.java

Rob Moore

Sep 16, 2012, 7:16:00 PM
to mongod...@googlegroups.com

The only thing I see is the linear search of the types/field names.  Can you change:

    private List<String> acceptedDatatypes;
 
    private List<String> dataTypesList;

...

        this.acceptedDatatypes = Arrays.asList(this.columnNames);
        this.dataTypesList = Arrays.asList(this.dataTypes);

to

    private Set<String> acceptedDatatypes;
 
    private Set<String> dataTypesList;

...
        this.acceptedDatatypes = new java.util.HashSet<String>(Arrays.asList(this.columnNames));
        this.dataTypesList = new java.util.HashSet<String>(Arrays.asList(this.dataTypes));
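
For what it's worth, the reason this helps: HashSet.contains() is a constant-time hash lookup, while List.contains() scans the whole list, and getDocumentBuilder(...) calls contains() once per field for every one of the million documents.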

amar shiva

Sep 17, 2012, 9:28:49 AM
to mongod...@googlegroups.com
Hi Robert,

I will try this and I will update you :)

Thanks & Regards,
Amar