session flush inserts performing VERY slow for MS SQL Server


jbrow...@clearcosthealth.com

Mar 7, 2016, 7:12:10 PM
to sqlalchemy
I've run SQLAlchemy against MySQL databases and never had a performance problem, but when I started writing an app that connects to Microsoft SQL Server 2014 via pyodbc I ran into a serious one.
I'm running the app on my laptop and the database server is behind a VPN, but my internet is fast and the VPN connection is reportedly fast as well.

I decided to run the performance script from the FAQ at http://docs.sqlalchemy.org/en/rel_0_8/faq.html (copied below for convenience).
The FAQ reports being able to insert **100,000 rows in ~14 seconds**; when I run the script below I am inserting **1000 rows in ~100 seconds**.

Is there some additional profiling that I can do?
Is there something wrong with my connection setup?

OUTPUT:
before init
engine: 0.0750000476837
session remove: 0.0759999752045
session configure: 0.0770001411438
drop all: 1.02800011635
create all: 1.28400015831
init complete 1.28600001335
flush complete 0.192000150681 // total rows added: [0]
flush complete 115.700000048 // total rows added: [1000]
flush complete 220.65199995 // total rows added: [2000]


CODE:
import time

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String,  create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

import urllib

Base = declarative_base()
DBSession = scoped_session(sessionmaker())
engine = None

server = "xx.xxx.xx.xx,xxxxx"
database = "test"

raw_connection_string = "DRIVER={SQL Server Native Client 11.0};Server=%s;Database=%s;TDS_Version=8.0;Trusted_Connection=yes;" % (server, database)
quoted = urllib.quote_plus(raw_connection_string)
connection_string = "mssql+pyodbc:///?odbc_connect={}".format(quoted)

ECHO = None # "debug"


class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

def init_sqlalchemy(dbname='jason'):
    global engine
    t0 = time.time()
    engine = create_engine(connection_string, echo=ECHO)
    print 'engine: %s' % str(time.time() - t0)
    DBSession.remove()
    print 'session remove: %s' % str(time.time() - t0)
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    print 'session configure: %s' %  str(time.time() - t0)
    Base.metadata.drop_all(engine)
    print 'drop all: %s' %  str(time.time() - t0)
    Base.metadata.create_all(engine)
    print 'create all: %s' %  str(time.time() - t0)

def test_sqlalchemy_orm(n=100000):
    t0 = time.time()
    print 'before init'
    init_sqlalchemy()
    print 'init complete %s' % str(time.time() - t0)

    t0 = time.time()
    for i in range(n):
        
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
            print 'flush complete %s // total rows added: [%s]' % (str(time.time() - t0), i)
    DBSession.commit()
    print("SQLAlchemy ORM: Total time for " + str(n) +
                " records " + str(time.time() - t0) + " secs")


if __name__ == '__main__':
    test_sqlalchemy_orm(100000)




Mike Bayer

Mar 7, 2016, 8:24:41 PM
to sqlal...@googlegroups.com
What will make inserts slow is the network as well as indexes. If you aren't comparing your script to some other backend, then there are probably a whole lot of other things that can get in the way, like unicode encoding, large columns, or other surprises in the model. If you run the full profile version from the FAQ (please refer to the 1.0 documentation... also, you're on SQLA 1.0, right?), I can identify where the time is spent with a full Python profile.
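
For reference, the FAQ recipe boils down to wrapping the test in cProfile; a minimal sketch along those lines (not the FAQ code verbatim) is:

import cProfile
import pstats
import StringIO  # Python 2; on Python 3 use io.StringIO instead

def profiled(fn, *args, **kw):
    # run fn under cProfile and print the top of the cumulative-time report
    pr = cProfile.Profile()
    pr.enable()
    result = fn(*args, **kw)
    pr.disable()
    buf = StringIO.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats(50)
    print(buf.getvalue())
    return result

# e.g. profiled(test_sqlalchemy_orm, 1000)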

Jason Brownstein

Mar 7, 2016, 9:19:47 PM
to sqlal...@googlegroups.com
Thanks Mike - will run the full profile ASAP.

jbrow...@clearcosthealth.com

Mar 8, 2016, 11:39:33 AM
to sqlalchemy
I'm on SQLA 1.0.12 / MS SQL Server 2014 / 64-bit Python 2.7 (sorry, I should have included that in the first post).
Here is an extract from the profile, using the code from http://docs.sqlalchemy.org/en/rel_0_8/faq.html#code-profiling.
I am reading the explanations in the FAQ now, but any guidance would be appreciated. Also, let me know if you need to see any more of the output - it is ~900 lines long.

before init
engine: 0.0989999771118
session remove: 0.100000143051
session configure: 0.101000070572
drop all: 1.21700000763
create all: 1.39599990845
init complete 1.39800000191
flush complete 94.3480000496 // total rows added: [1000]
SQLAlchemy ORM: Total time for 1000 records 94.388999939 secs
         168583 function calls (168092 primitive calls) in 95.790 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.015    0.015   95.790   95.790 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\sqlbenchmark.py:60(test_sqlalchemy_orm)
     1003    0.008    0.000   94.251    0.094 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\scoping.py:156(do)
        3    0.003    0.001   94.059   31.353 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\session.py:1985(flush)
        1    0.012    0.012   94.051   94.051 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\session.py:2036(_flush)
        1    0.000    0.000   93.893   93.893 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\unitofwork.py:351(execute)
        1    0.000    0.000   93.890   93.890 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\unitofwork.py:528(execute)
        1    0.009    0.009   93.890   93.890 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\persistence.py:124(save_obj)
     1006    0.012    0.000   93.873    0.093 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:846(execute)
        1    0.080    0.080   93.761   93.761 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\orm\persistence.py:736(_emit_insert_statements)
     1006    0.109    0.000   93.743    0.093 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:1061(_execute_context)
     1004    0.008    0.000   93.689    0.093 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\sql\elements.py:322(_execute_on_connection)
     1004    0.051    0.000   93.680    0.093 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:975(_execute_clauseelement)
     1002    0.033    0.000   46.720    0.047 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\default.py:810(_setup_crud_result_proxy)
     1009   46.673    0.046   46.673    0.046 {method 'execute' of 'pyodbc.Cursor' objects}
     1008    0.021    0.000   46.652    0.046 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\default.py:449(do_execute)
     1006    0.019    0.000   46.636    0.046 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\result.py:637(_soft_close)
     1006    0.027    0.000   46.617    0.046 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:1220(_safe_close_cursor)
     1009   46.590    0.046   46.590    0.046 {method 'close' of 'pyodbc.Cursor' objects}
     1001    0.010    0.000   46.574    0.047 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\result.py:1180(_soft_close)
        1    0.000    0.000    1.397    1.397 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\sqlbenchmark.py:45(init_sqlalchemy)
        2    0.000    0.000    1.294    0.647 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:1853(_run_visitor)
        1    0.000    0.000    1.115    1.115 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\sql\schema.py:3697(drop_all)
        4    0.000    0.000    0.934    0.234 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:1845(_optional_conn_ctx_manager)
        3    0.000    0.000    0.934    0.311 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:2020(contextual_connect)
        2    0.000    0.000    0.934    0.467 C:\Python27\Lib\contextlib.py:15(__enter__)
        3    0.000    0.000    0.930    0.310 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:2071(_wrap_pool_connect)
        3    0.000    0.000    0.930    0.310 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:367(connect)
        3    0.000    0.000    0.930    0.310 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:710(_checkout)
        3    0.000    0.000    0.930    0.310 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:478(checkout)
        3    0.000    0.000    0.930    0.310 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:1039(_do_get)
        1    0.000    0.000    0.929    0.929 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:320(_create_connection)
        1    0.000    0.000    0.929    0.929 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:447(__init__)
        1    0.000    0.000    0.641    0.641 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\pool.py:604(__connect)
        1    0.000    0.000    0.641    0.641 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\strategies.py:90(connect)
        1    0.000    0.000    0.641    0.641 C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\default.py:384(connect)
        1    0.641    0.641    0.641    0.641 {pyodbc.connect}



Mike Bayer

Mar 8, 2016, 7:33:17 PM
to sqlal...@googlegroups.com
so the profile shows pretty clearly that the time is spent on the DB side. 1006 SQL statements are clocking in at .09 seconds each (!) to take a total of 93 seconds:

1006  0.012  0.000  93.873  0.093  C:\Users\jbrownstein\Documents\GitHub\ETL\ETL\env\lib\site-packages\sqlalchemy\engine\base.py:846(execute)

we can see the 93 seconds is mostly spent in two pyodbc methods:

1009  46.673  0.046  46.673  0.046  {method 'execute' of 'pyodbc.Cursor' objects}
1009  46.590  0.046  46.590  0.046  {method 'close' of 'pyodbc.Cursor' objects}


for contrast, here I'll run the example suite that comes with SQLAlchemy. For this test, we can run:

python -m examples.performance.bulk_inserts --num 1000 --dump --test test_flush_no_pk --dburl postgresql://scott:tiger@localhost/test

in this test, I'm running against a database on the local machine, so network overhead is extremely low. The time spent within the execute() method of the psycopg2 cursor:

1000  0.359  0.000  0.364  0.000  {method 'execute' of 'psycopg2.extensions.cursor' objects}

So that's 1000 calls taking .3 seconds total. Other psycopg2 methods hardly show up in the profile at all.

Next, I'll run the identical test against a SQL Server instance that I'm running on a QEMU VM on the same machine, using pyodbc with FreeTDS. This test runs all the particular SQL-isms that your test does (including getting the last inserted primary key):

1000  1.697  0.002  1.697  0.002  {method 'execute' of 'pyodbc.Cursor' objects}

so that is 1.7 seconds for 1000 calls, compared to your ~47.

the close() method, which on your system is somehow taking 46 seconds for 1000 calls, takes 5/1000ths of a second for 1000 calls locally:

1000  0.005  0.000  0.005  0.000  {method 'close' of 'pyodbc.Cursor' objects}


so it looks like you are definitely having some extreme network overhead and/or SQL Server latency issues. This is outside of SQLAlchemy.
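
If you want to reproduce that comparison against your own server, the same suite takes your URL directly; a sketch, substituting the quoted odbc_connect string your script already builds for the placeholder:

python -m examples.performance.bulk_inserts --num 1000 --dump --test test_flush_no_pk --dburl "mssql+pyodbc:///?odbc_connect=<your-quoted-connection-string>"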

jbrow...@clearcosthealth.com

Mar 9, 2016, 11:56:58 AM
to sqlalchemy
This is all extremely helpful.

After some hair-pulling I was able to get local SQL Server Express running, and I ran the same test against the local instance.

     1010    0.126    0.000    0.126    0.000 {method 'execute' of 'pyodbc.Cursor' objects}
     1010    0.060    0.000    0.060    0.000 {method 'close' of 'pyodbc.Cursor' objects}

Results seem to be as you expected, and I am assuming this means the bottleneck is the connection over the VPN (locally each execute() averages ~0.0001 seconds, versus ~0.046 over the VPN).
I am not terribly network/sysadmin savvy, but I am going to troubleshoot the network connection as best I can. I'm a little concerned about what I will find, though, because I can run the same inserts in SQL Server Management Studio without any apparent network slowness.
For instance, I can run the following over the network through SSMS in about 1 second:
DECLARE @x INT = 0
WHILE @x < 1000
BEGIN
    SET @x += 1
    INSERT INTO dbo.customer(name) VALUES ('NAME ' + CAST(@x AS VARCHAR(100)))
END

I know that ORMs in general aren't supposed to be as fast as straight SQL, but doesn't the SSMS performance contradict the network-slowness theory?
Also, the VPN connection itself is reporting a fast connection.

So what are the best steps for debugging the slow connection?
- I'm trying to get set up to run this inside the VPN network itself, but for the moment that is not possible for reasons outside my control
- Talk to the hosting company?
- Look up SQL Server connection settings?
- Test the connection settings?
- Ping the server?

Thanks again for walking me through the profile.

Mike Bayer

Mar 9, 2016, 2:57:53 PM
to sqlal...@googlegroups.com



ok, but let's look at what we have here. That SSMS loop is *one* SQL statement, about 200 characters plus a little bit of conversational back-and-forth to make it happen, emitted over the network. That your database can insert 1000 rows quickly is not the problem here. It's that sending 1000 individual INSERT statements over the network, each with that same conversational back-and-forth, is too slow. Even there, if Management Studio showed you the statement's completion time at microsecond resolution, you might see it taking .09 seconds rather than .002; not noticeable to a human, but a big deal when you multiply: .09 x 1000 is 90 seconds, which is just about what your profile shows.




I'm not sure what "the VPN reports a fast connection" means, but here you want to use ping as a first line of inspection for network speed.

From there, a pretty standard tool is Wireshark (https://www.wireshark.org/), which can show you everything about the packets on the network. It's a big professional (but still free and open) tool, there are Windows versions, and there are plenty of tutorials on the web on how to use it.
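
If you want a quick number before setting up Wireshark, you can also time the per-statement round trip from Python itself. A rough sketch, assuming the same style of DSN string your script builds (the <server> placeholder is yours to fill in):

import time
import pyodbc

# same style of DSN string the original script builds; <server> is a placeholder
raw_connection_string = ("DRIVER={SQL Server Native Client 11.0};"
                         "Server=<server>;Database=test;Trusted_Connection=yes;")
conn = pyodbc.connect(raw_connection_string)
cursor = conn.cursor()

n = 100
t0 = time.time()
for _ in range(n):
    cursor.execute("SELECT 1").fetchall()  # a near-no-op round trip to the server
print("average round trip: %.4f sec" % ((time.time() - t0) / n))

If that average lands anywhere near the .09 seconds per statement in your profile, the round trip itself is the whole story.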





Jonathan Vanasco

Mar 9, 2016, 3:02:00 PM
to sqlalchemy
Those aren't the same inserts as SQLAlchemy's.

The SQL you noted above (DECLARE / BEGIN / END) is one packet of text that is sent over the wire to the database server and executed (a few hundred bytes).

A more appropriate comparison would be sending 1000 separate "INSERT INTO ____" statements over the network connection (nearly 100k), and you'd need to send each statement one at a time, waiting for a response before you can send the next.
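
If the round trips do turn out to be the cost, one mitigation worth knowing about (a sketch, assuming the Customer model and engine from the original script) is to render the whole batch as a single multi-row INSERT with Core, so the 1000 rows travel in one statement and one round trip. Note SQL Server accepts at most 1000 rows per VALUES list:

# one INSERT ... VALUES (...), (...), ... statement instead of 1000 round trips
rows = [{"name": "NAME %d" % i} for i in range(1000)]
engine.execute(Customer.__table__.insert().values(rows))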

jbrow...@clearcosthealth.com

Mar 9, 2016, 3:17:26 PM
to sqlalchemy
Like I said, I am not terribly network savvy, but what I meant was that the particular VPN client I am using says my "connection speed" is ~540 Mbps.
I know comparing SQLAlchemy to raw SQL is kind of a no-no, but I was just illustrating that there isn't obvious evidence of a slow network connection.
Wireshark looks very promising - I appreciate the recommendation.

For the record, I am a huge SQA fan and have been using it for years on various projects (in the past all on MySQL). I'm not trying to poke holes in its performance at all.
I'm just hoping to gain some troubleshooting skills and get it working for this new project.

Thanks 

Jonathan Vanasco

Mar 9, 2016, 4:05:00 PM
to sqlalchemy

On Wednesday, March 9, 2016 at 3:17:26 PM UTC-5, jbrow...@clearcosthealth.com wrote:
Like I said, I am not terribly network savvy, but what I meant was that the particular VPN client I am using says my "connection speed" is ~540 Mbps.

That's most likely the speed to your router on the LAN (wireless could do that; it's about half gigabit-Ethernet speed). If you're going over a VPN, actual throughput will be a fraction of that.
 
I know comparing SQLAlchemy to raw SQL is kind of a no-no, but I was just illustrating that there isn't obvious evidence of a slow network connection.

It's not that there's necessarily a slow network connection - it's that there are inherent overheads and slowness involved when you run the database on another machine, and that is compounded when you (physically) locate that machine on another network.    

Among the many things that can be happening, the VPN could just not be optimized for this type of use.

You should try running the script from the machine or network you're dialing into; I'm fairly certain you'll see a noticeable speed difference.