
Replication/Synchronization and security


Alpacian

Jun 13, 2005, 5:10:02 PM

I have a situation with front end and back end databases that require
security because they contain medical data. Because these databases need to
be used by more than one person, the back end resides on a network share
with folder-level security, and the network is very slow for these users.

My thinking is to use replication.

I would like to create a replica of the back end on each user's machine
(C: drive) that synchronizes with the network file when the database opens,
so they have the latest information, and synchronizes again when they close
it, so there is a central storage place for all the users. Because of the
way our network is configured, I would then like the data on the C: drive
to be deleted when they are not using the database, to decrease the
potential for a breach.

I have looked through the posts and was not able to find one that answered
my question.

I have gotten as far as creating the secured replica of the back end, and
it does sync, but I need to get the front end to sync the back end as the
user begins to use the database (using the jrSyncTypeImport option, or else
the blank database on the user's machine will wipe the master, I think).

The code I have tried on the front end (both attached to the AutoExec macro
and the Open event of the initial menu form) is:
Code:
--------------------------------------------------------------------------------

Dim rep As JRO.Replica
Set rep = New JRO.Replica
' db to be imported into:
rep.ActiveConnection = "C:\apps\...mdb"
' db pulled from:
rep.Synchronize "S:\scproj3\....mdb", jrSyncTypeImport, jrSyncModeDirect
Set rep = Nothing
--------------------------------------------------------------------------------


I have also tried putting the code into an AutoExec macro on the back-end
database, with the same results.

I get the infamous 'you don't have permission; talk to the administrator'
error.

David W. Fenton

Jun 13, 2005, 8:59:47 PM
"Alpacian" <Alpa...@discussions.microsoft.com>
wrote in news:078504ED-EA15-494C...@microsoft.com:

> I have a situation with front end and back end databases that
> require security because it contains medical data. Because these
> databases need to be used by more than one person it resides on a
> network with folder level security and the network has a very slow
> speed for these users.
>
> My thinking is to use replication.

No.

Replication is not for solving application performance issues.

Secondly, by replicating the data to all the workstations you've
compromised security -- you've multiplied the number of locations
where data could be compromised.

Figure out why the application is slow and fix that.

Free hint: it's not because of security on the file share where the
back end is stored.

It's most likely a design problem with your application.

> I would like to create replicas of the back on the users machine
> (C drive) that is synchronized to the network file when it opens
> so they have the latest information and then synchronizes when
> they close it to have a central storage place for all the users.
> Because of the way our network is configured I would then like the
> data on the C drive to be deleted when they are not using the
> database to decrease the potential for breach.

No, you can't do that, as it means you'd be creating dead replicas.
That will eventually lead to corruption of your replica set, possibly
to the point of complete loss of the entire replica set and all its
data.

> I have looked through the posts and was not able to find one that
> answered my question.

You have the wrong idea. It's fundamentally wrong in so many ways as
to almost defy refutation.

> I have gotten as far as creating the secured replica of the back
> end and it does sync but need to get the front end to sync the
> back end as the user begins to use the database (using the
> jrSyncTypeImport option or the blank database on the users machine
> will wipe the master-I think)

I think JRO is a waste of time, especially when you're on a LAN and
can use native DAO methods to synch.
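
For what it's worth, a direct DAO synch is only a few lines. Here's a
minimal sketch -- the paths and file names are placeholders, and it
assumes both files are already members of the same replica set:

Code:
--------------------------------------------------------------------------------

' Direct two-way DAO synch between a local replica and the replica on
' the server share. The paths below are placeholders, not real files.
Dim db As DAO.Database
Set db = DBEngine.OpenDatabase("C:\apps\LocalReplica.mdb")
' dbRepImpExpChanges exchanges changes in both directions
db.Synchronize "S:\scproj3\Data.mdb", dbRepImpExpChanges
db.Close
Set db = Nothing
--------------------------------------------------------------------------------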

One main concern:

How will you handle conflicts? That is, if one person edits a record
on their workstation and somebody else edits the same record on a
different workstation, what happens when those two data files are
synched?

Your problem is application design.

If the application is running slowly over the LAN, then you need to
redesign it to be more efficient. Here are some things to do:

1. never load more records than the user is going to use. That means
not binding forms to tables, but to a SQL string that is set when the
user requests a record (see the first sketch after this list). If it's
medical data, you probably have some form of patient ID number, and
that can be the basis for the data retrieval. This also enhances
security, since it makes it harder for somebody to copy large blocks
of data to the clipboard and paste them into Excel (which is possible
when you've loaded large numbers of records in a form).

2. for dropdown picklists with large numbers of records in the base
lookup table (leaving aside that it's almost always a bad idea to use
such controls for more than a handful of records), don't populate the
rowsource of the dropdown until the user has typed a few characters,
and then filter it by what they've typed. You do this in the OnChange
event of the combo box: check the length of what's been entered, and
once there are 2 or 3 characters, use that in a WHERE clause on the
SQL string that sets the rowsource for the combo box (see the second
sketch after this list).

3. never assign a recordsource or rowsource to a subform or control
that is not onscreen. This means that if you have a form with
multiple tab pages and a subform on each page, you load the subform
only when that tab is selected, using the Tab's OnChange event.

4. for seldom-changing lookup lists, you can copy the data over to
the front end when the user logs on. This means that to get that
data for dropdown lists and the like, the workstation has to make
only one request across the LAN. However, this is a very small
performance improvement, as that kind of data is usually cached
after retrieval anyway.
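
To make points 1 and 2 concrete, here are two minimal sketches. The
control, table and field names (cmdFindPatient, txtPatientID,
cboPatient, tblPatients, PatientID, LastName) are made up for
illustration, and the patient ID is assumed to be numeric -- adjust
them to your own schema:

Code:
--------------------------------------------------------------------------------

' Point 1, sketched: leave the form unbound until the user asks for a
' specific record, then bind it to just that record.
' All names here are hypothetical; PatientID is assumed to be numeric.
Private Sub cmdFindPatient_Click()
    If Len(Me.txtPatientID & vbNullString) > 0 Then
        Me.RecordSource = "SELECT * FROM tblPatients " & _
            "WHERE PatientID = " & Me.txtPatientID
    End If
End Sub
--------------------------------------------------------------------------------

Code:
--------------------------------------------------------------------------------

' Point 2, sketched: keep the combo's rowsource empty until the user
' has typed at least 3 characters, then filter on what they've typed.
' Again, the control and field names are hypothetical.
Private Sub cboPatient_Change()
    If Len(Me.cboPatient.Text) >= 3 Then
        Me.cboPatient.RowSource = "SELECT PatientID, LastName " & _
            "FROM tblPatients WHERE LastName LIKE '" & _
            Me.cboPatient.Text & "*' ORDER BY LastName"
    Else
        Me.cboPatient.RowSource = vbNullString
    End If
End Sub
--------------------------------------------------------------------------------

Assigning the rowsource requeries the combo automatically, so only the
handful of matching rows ever comes across the wire.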

Last of all, consider the nature of the network. It could be screwed
up. It should be 100BaseT these days -- I don't even know if you can
buy 10BaseT NICs any more. And you shouldn't even think about using
Access across a wireless link, as this will unquestionably lead to
corruption of your back end database with the first dropped
connection during a data edit.

Also, if there is something wrong with the network that's leading to
performance problems, then it's likely to mean the network is just
not reliable for data transfer, which means you are possibly exposed
to corruption of the data.

Another solution that some would recommend is to switch the back end
to SQL Server, MSDE, MySQL or some such, but if you're having
performance problems because of application design, you'll just be
pushing all those performance bottlenecks onto an overworked
database server. You could see some improvement if the slowdowns are
due to saturation of the network (with a database server less data
is going to be pulled across the wire, though not nearly as much
less as you might think, or as much less as many Access haters will
wrongly tell you), but that's very unlikely to be the cause.

In short, converting the back end to a server database doesn't
usually improve performance massively unless your application is
already designed to be efficient, and if it were, you wouldn't be
experiencing problems in the first place!

But, very definitely, replication is absolutely NOT the answer,
especially in the scenario you've outlined with deleting replicas,
which is an absolutely disastrous action to take.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc

Alpacian

Jun 14, 2005, 10:16:13 AM

Thank you for your response. I will look further into optimizing the
application. However, when the application is run on the C: drive it is
almost instantaneous in bringing up the data and changing between screens.
When put on the network it takes from several seconds (5-10) for those
close to the server (i.e. in the building) to 20-30 seconds for those on
the other side of the continent to move between screens. In discussion
with our IT people, I am told that our network drives are for data storage
(Word/Excel etc. -- low-bandwidth items) and not meant for databases, hence
the desire to explore the possibility of replication as a solution. Putting
the database (back end) on a different server may be another option, which
I haven't explored as it requires scripting and numerous hoops.

James

David W. Fenton

Jun 14, 2005, 4:35:03 PM
"Alpacian" <Alpa...@discussions.microsoft.com>
wrote in news:37122136-24AE-4842...@microsoft.com:

> Thank you for your response. I will look further into optimizing
> the application. However, when the application is run on the c:
> drive it is almost instantaneous in bringing up the data and
> changing between screens. When put on the network it takes several
> seconds (5-10) for those close to the server (i.e. in the
> building) to 20-30 seconds for those from the other side of the
> continent to move between screens. . . .

Ah. You didn't mention that you were on a WAN, not just a LAN.

Access cannot under any circumstances be usably or safely run across
anything less than a 10BaseT LAN.

> . . . In discussion with our IT
> people, I am told that our network drives are for data storage
> (word/excel etc.-low bandwidth items) and not meant for databases
> hence the desire to explore the possiblity of replication as a
> solution. Putting the database (back end) on a different server
> may be another option which I haven't explored as it requires
> scripting and numerous hoops.

Replication between two replicas on servers local to the users is a
valid topology. If there is a server close to the people on the
other side of the continent, then that would be a good place for a
local replica.

Then you'd synch the two replicas via indirect synchronization (NEVER
direct over anything but a LAN) on an appropriate schedule.
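
If you drive the synch from code, the JRO call looks just like the
direct case with a different mode. Bear in mind, though, that indirect
synchronization also depends on the Jet Synchronizer (from the
Replication Manager tools) being installed and configured with dropbox
folders at both locations -- the code alone won't do it. A minimal
sketch, with placeholder UNC paths:

Code:
--------------------------------------------------------------------------------

' Indirect two-way synch between the two server-based replicas.
' The paths are placeholders; the Synchronizers do the actual exchange.
Dim rep As New JRO.Replica
rep.ActiveConnection = "\\LocalServer\Data\Replica.mdb"
rep.Synchronize "\\RemoteServer\Data\Replica.mdb", _
    jrSyncTypeImpExp, jrSyncModeIndirect
Set rep = Nothing
--------------------------------------------------------------------------------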

Archidrb

Apr 26, 2007, 12:32:04 PM

I administer three Access '03 DBs over a network. Performance is slow
less because of the network and more because of how Access talks to itself.
When the front end loads at the user's end and a query is run to select
data, the user's instance sends a message to the networked back end,
gathers the data and returns the result. Well, sure, this seems simple
enough; SQL does the same thing. The difference is that Access is kind of
dumb at searching and it talks very slowly to itself. If your query
involves any modification to the data...

A SQL (or other enterprise-level) back end will solve 93% of the
performance issues. Access is really designed to be a small, all-in-one
package for small business and home users. It can be enterprise level as a
front end, but it really needs a much more robust back end and coding.

There can be other problems with Access and a SQL backend, but those are
covered in other conversations.

David W. Fenton

Apr 26, 2007, 3:55:25 PM
Archidrb <Arch...@discussions.microsoft.com> wrote in
news:34DF2FE7-5F9C-4FB8...@microsoft.com:

> I administrate three Access '03 DB's over a network. Performance
> is slow less because of the network and more because of how Access
> talks to itself. When the front end loads at the user's end and a
> query is run to select data, the user's instance sends a message
> to the networked backend. gathers the data and returns the result.

No, that's not even close to how it works. Your workstation opens
the data MDB across the network (at first, all it loads is the header
data), and when you run a query locally, your local PC reads the
header data and figures out which indexes to use, then requests the
relevant index pages from the file system. Once it has retrieved the
indexes, your local workstation requests the appropriate data pages
from the file system. Nothing happens at all on the back end side of
the transaction except serving files.

> Well, sure this seems simple enough, SQL does
> the same thing.

No, it doesn't. A server-based database has a server process running
on the back end server that mediates between the requests of the
workstation and the actual database.

> The difference is that Access is kind of dumb at searching
> and it talks very slowly to itself. If your query involves any
> modification to the data...

I have not found Access queries to be slow against Jet data.

> A SQL (or other enterprise level) backend will solve 93% of the
> performance issues. Access is really designed to be a small all
> in one package for small business and home users. It can be
> enterprise level as a front end, but really needs a much more
> robust backend and coding.
>
> There can be other problems with Access and a SQL backend, but
> those are covered in other conversations.

You really don't know what you are talking about.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/

jankowalski

May 25, 2007, 2:44:42 AM

--
msnews.microsoft.com
User "Archidrb" <Arch...@discussions.microsoft.com> wrote in message
news:34DF2FE7-5F9C-4FB8...@microsoft.com...
