I have even written a pretty cool utility to create an Empty Full replica at
the same time that the Design Master is created from an unreplicated
database. It deletes all the data, converts it to a design master, creates a
replica (the empty full replica), and then repopulates the design master
with the original database's data. It does this by reading the database
schema and automatically determining the table/foreign table hierarchy. It
deletes data from the "bottom up" and repopulates it from the "top down".
The EF replica is read-only protected to prevent someone inadvertently
two-way synching with it. A function is provided to clear the read-only
attribute, one-way synch it with the DM and then set the read-only attribute
again. This maybe used to keep the EF up to date with schema changes in the
DM.
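As a rough sketch, that refresh routine boils down to something like the
following DAO code (the file names here are only placeholders):

Sub RefreshEmptyFull()
    Const EF_PATH As String = "D:\Replicas\AppBE_EF.mdb"   ' placeholder path
    Const DM_PATH As String = "D:\Replicas\AppBE_DM.mdb"   ' placeholder path
    Dim dm As DAO.Database

    SetAttr EF_PATH, vbNormal                    ' clear the file's read-only attribute
    Set dm = DBEngine.OpenDatabase(DM_PATH)
    dm.Synchronize EF_PATH, dbRepExportChanges   ' one-way: DM -> EF
    dm.Close
    SetAttr EF_PATH, vbReadOnly                  ' protect the EF again
End Sub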
The same utility can be used to create a partial replica, read its schema
into an editor for the ReplicaFilter and PartialReplica properties of the
tables and relationships, save the properties back to the partial replica
and then PopulatePartial it. This allows filters to be set on multiple
tables. It's like the Partial Replica Wizard on steroids!
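Stripped of the UI, the essence of the filter editing is just a few DAO
property assignments followed by PopulatePartial. A minimal sketch, with
invented table, relation and path names:

Dim part As DAO.Database
' The partial must be opened exclusively for ReplicaFilter and PopulatePartial.
Set part = DBEngine.OpenDatabase("D:\Replicas\AppBE_Partial.mdb", True)
part.TableDefs("Customers").ReplicaFilter = "RegionID = 5"
part.TableDefs("Orders").ReplicaFilter = "RegionID = 5"
part.Relations("CustomersOrders").PartialReplica = True
part.PopulatePartial "D:\Replicas\AppBE_Full.mdb"   ' repopulate from a full replica
part.Close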
I want a head-office hub with a Replication Manager managed farm of three
full replicas. The DM and original EF will be stored here but not managed.
My head-office users will link their application to one of the farm members,
ensuring that this is not the base replica used to synchronize outside of
the hub.
I have seen it proposed that the users should link to an unmanaged replica
which is manually synched with the farm, but I would rather have this
scheduled by Replication Manager. Is this wise?
Now my problem.
I want a satellite office to have a partial replica farm. I can create the
original partial replica at head office and MoveReplica it across the WAN to
the satellite office. I can MakeReplica a duplicate of the EF replica and
MoveReplica it across the WAN to the satellite office. I could then use this
to create another two partial replicas (initially empty since they are
MakeReplica'd from an EF) and synchronise them with the original partial
replica to populate them. I would then have three partial replicas for a
farm and an EF to create any subsequent replicas without moving data across
the WAN.
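Creating the extra partials from the copied EF would be something like this
(a sketch only - the paths are invented, and the EF's read-only attribute has
to be cleared first):

Dim ef As DAO.Database
SetAttr "D:\Farm\AppBE_EF.mdb", vbNormal
Set ef = DBEngine.OpenDatabase("D:\Farm\AppBE_EF.mdb")
' The new partials start empty; populating them is a separate step.
ef.MakeReplica "D:\Farm\AppBE_Partial2.mdb", "Satellite partial 2", dbRepMakePartial
ef.MakeReplica "D:\Farm\AppBE_Partial3.mdb", "Satellite partial 3", dbRepMakePartial
ef.Close
SetAttr "D:\Farm\AppBE_EF.mdb", vbReadOnly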
However I see no way to manage the partial replicas as a farm. They cannot
be synched with each other so they need at least one full replica to be
managed with them to gain the benefits of a farm.
I could create a "De Facto Partial" replica by creating a full replica from
the local EF and synching it with a partial. Indeed I may as well turn the
local EF into a DFP by two-way synching it with a partial.
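In DAO terms that would be a single two-way exchange, roughly (paths
invented):

Dim ef As DAO.Database
Set ef = DBEngine.OpenDatabase("D:\Farm\AppBE_EF.mdb")
ef.Synchronize "D:\Farm\AppBE_Partial1.mdb", dbRepImpExpChanges   ' two-way: EF <-> partial
ef.Close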
However, if I manage the DFP with Replication Manager, it will get used as
the base replica for the farm and will drag all the head-office data over on
the first indirect synch.
So is it possible to manage the satellite office's partial replicas as a
farm?
Best Regards
Neil
--------------------------------------------------------
Neil Sargent
Smart IT
Email: ne...@sargent.nospam.demon.co.uk
--------------------------------------------------------
Can you synch between full replica farms, and then have a partial
appended to the farm? The users would interact with the partial, but
the hub would contain a farm of full replicas. Dunno if that would
work... just offering a possible alternative.
**********************
jackmacM...@telusTELUS.net
remove uppercase letters for true email
http://www.geocities.com/jacksonmacd/ for info on MS Access security
I could create a hub farm with one or two full replicas and one or two
partials and have the local users link their application to one of the
partials. However I would be relying on their ignorance to prevent them from
linking to one of the full replicas. This is not secure enough in this
situation. It would be much better to ensure that they only ever receive
their own data on site than to contrive a security system to prevent them
getting at data they should not see.
I do not want to simply make them a leaf node partial because I get the
impression that it is inevitable that it will eventually not want to synch
(hence the concepts of the farm and empty full replica). I am an outside
consultant and they do not have anyone at head office who would be able to
cope with sending an EF to repair the partial. The partial is too large to
keep sending over the wire - it is big and we are on dial-up at about
33kbaud.
Any suggestions?
Best Regards
Neil
"Jack MacDonald" <jackMACm...@telus.net> wrote in message
news:r8sl209bvfmm2s1fi...@4ax.com...
--
MichKa [MS]
NLS Collation/Locale/Keyboard Development
Globalization Infrastructure and Font Technologies
This posting is provided "AS IS" with
no warranties, and confers no rights.
"Neil Sargent" <ne...@sargent.demon.co.uk> wrote in message
news:c0fu1m$q6d$1...@news6.svr.pol.co.uk...
One of the great things about partial replication is that it can achieve
very sophisticated filtering very easily. To design the UI to apply the
filtering would be a major undertaking, especially if it were designed to be
as flexible as partial replication.
I also believe that the only way to stop someone getting at Jet data that
they shouldn't have is to not put it into their possession in the first
place. No amount of skilled hacking on the local hub can yield data which is
not stored there!
To be honest, the data is not *that* precious - I really don't want to get
into a security discussion here.
Could I use two synchronizers (on different workstations)? One could be
responsible for linking to head office to get one partial replica to
synchronize the required data onto site. The other could run a farm of DFP
replicas which synch with the partial replica "gateway".
I don't know if this is practical. The obvious place for the farm is on the
server, which is likely to be the one that links to head-office. I may be
able to get another always-on workstation to manage the farm. Do you think
this would work? Or am I better off just treating the satellite office as a
leaf and losing the benefits of a farm?
Best Regards
Neil
"Michael (michka) Kaplan [MS]" <mic...@online.microsoft.com> wrote in
message news:u2VZkEX8...@TK2MSFTNGP12.phx.gbl...
Note that the hub is a full replica, and the division contains all the
data, but the users can't see it. Does that meet your objectives?
On Thu, 12 Feb 2004 16:56:19 -0000, "Neil Sargent"
I like the idea of using the indirect synchronization to isolate the hub
from the partial replica. I had only considered direct synching on the LAN.
This means that I can lock the users out from the farm using folder
permissions so they can't direct synch through the Access UI. Only the
administrator on the server would need access to the farm's folder.
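The lockout itself need be nothing fancier than editing the folder's ACL,
for example (the folder and account names are only placeholders, and this
assumes NTFS permissions):

Shell "cacls ""D:\ReplicaFarm"" /E /R Users", vbHide        ' revoke ordinary users
Shell "cacls ""D:\ReplicaFarm"" /E /G SyncAdmin:F", vbHide  ' grant the synchronizer account full control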
I will have to get the client to decide if this is secure enough for their
needs and whether they are prepared to tie up two machines with
synchronizers.
Thanks for the tip.
Best Regards
Neil
"Jack MacDonald" <jackMACm...@telus.net> wrote in message
news:uifo20hka2mq07gl5...@4ax.com...
This would require that the DFP farm never synchs with the full replica farm
at head office. This should be achievable with the appropriate use of
permissions, if the head office dropbox only provides access to the user
account running the "gateway" synchronizer and not to the account running the
second DFP farm synchronizer.
It sounds complicated but it may be worth it if the satellite office is
going to run a moderate number of users.
"Neil Sargent" <ne...@sargent.demon.co.uk> wrote in message
news:c0i4u9$8lh$1...@news8.svr.pol.co.uk...
Another single-computer option might (!!) be to locate your full
replica farm on the remote office computer as discussed. Then locate a
partial replica on a share on the same machine for the users to access
from the LAN. Do not manage it with Replication Manager, but schedule
periodic synch's to it. Only one computer is involved, the users
interact with a partial, and the farm is a full replica. I *think* it
would work, but I would test on a throw-away database first.
I *think* this method would remove the restriction of never synch'ing
with the Head Office replica. Again, this is not based on first-hand
knowledge so testing is vital.
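The scheduled exchange itself would be tiny - something like this DAO
sketch (file names invented) if it were driven from a timed job rather than
Replication Manager:

Dim farmDb As DAO.Database
Set farmDb = DBEngine.OpenDatabase("D:\Farm\AppBEFarm1.mdb")
' Two-way exchange between one farm member and the unmanaged partial on the share.
farmDb.Synchronize "\\Server\Data\AppBE_Partial.mdb", dbRepImpExpChanges
farmDb.Close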
On Fri, 13 Feb 2004 09:34:12 -0000, "Neil Sargent"
One thing I have realised is that with all these proposed topologies, the
"gateway" must be synched with the head-office and then the user's replica
synched with the gateway. This is true, whether the "gateway" is a full
replica farm (as in your suggestion), or a single partial (as in mine). This
double synching would be fine if the gateway and head-office are kept in
synch by an automated schedule across an always-on connection. However it is
going to be difficult to manage if the satellite office must invoke a
dial-up connection as part of the synchronization procedure. In other words
the process is going to be:
1) dial-up
2) synch gateway partial / local farm
3) synch DFP farm / user's partial
4) hang-up
4) could precede 3) but would then split the process into two operations.
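Sketched out (the dial-up entry name and paths are invented, and the
head-office leg is shown as a simple direct DAO exchange for brevity - in
practice it would more likely be an indirect exchange handled by the
synchronizers):

Dim gw As DAO.Database, farmDb As DAO.Database

Shell "rasdial HeadOffice", vbHide                 ' 1) dial up (real code should wait for the connection)
Set gw = DBEngine.OpenDatabase("D:\Gateway\AppBE_Gateway.mdb")
gw.Synchronize "\\HeadOffice\Data\AppBE_Hub.mdb", dbRepImpExpChanges    ' 2) synch gateway with head office
gw.Close
Set farmDb = DBEngine.OpenDatabase("D:\Farm\AppBEFarm1.mdb")
farmDb.Synchronize "D:\Gateway\AppBE_Gateway.mdb", dbRepImpExpChanges   ' 3) synch local farm / user's partial
farmDb.Close
Shell "rasdial HeadOffice /disconnect", vbHide     ' 4) hang up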
I am really beginning to favour the idea of the satellite just having one
partial replica. Instead of using a farm to maintain its state of health, I
could automate the process of repairing the "can't find synchronizer" error. This
would be
1) MoveReplica an EF replica from head-office to the satellite office (the
local Dropbox would be an appropriate shared folder which is "known" to the
system)
2) One-way direct synch the EF to the "broken" partial
3) MoveReplica the EF back to head office for safe keeping
This process would have to be invoked and run from the satellite office so
that the direct synch in 2) is a local process.
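Step 2) is the only part that really needs code - a one-way push in DAO,
roughly (paths invented, with the EF's read-only attribute cleared first as
described earlier):

Dim ef As DAO.Database
SetAttr "D:\Dropbox\AppBE_EF.mdb", vbNormal
Set ef = DBEngine.OpenDatabase("D:\Dropbox\AppBE_EF.mdb")
ef.Synchronize "D:\Data\AppBE_Partial.mdb", dbRepExportChanges   ' one-way: EF -> broken partial
ef.Close
SetAttr "D:\Dropbox\AppBE_EF.mdb", vbReadOnly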
Is this a sensible process?
Is it worth automating? I do not know just how likely or frequently the error
which farms avoid will occur. I understand it occurs after one or
more failed synchs between the satellite and head office synchronizers, but
just how many failures make it "break"?
Best Regards
Neil
"Jack MacDonald" <jackMACm...@telus.net> wrote in message
news:qsfr20t6bs13pb0v4...@4ax.com...
- the local replica could be synchronized with the gateway
automatically using RM, thus requiring no user intervention
- the gateway must be sync'd with head office via dialup
- which is virtually the same as sync'ing your single partial replica
from the viewpoint of manual user intervention
Therefore the issue boils down to "is it possible to automatically
synch a replica farm over dialup"? My guess is "yes", but I haven't
done it personally.
Sounds to me like you are comfortable with developing your own
solutions. Have you considered using the TSI Synchronizer control
running on a timed process on your gateway computer? If you could
automatically establish the dialup connection, then you should be able
to invoke an automatic synch at the same time.
On Sat, 14 Feb 2004 14:37:28 -0000, "Neil Sargent"
<ne...@sargent.nospam.demon.co.uk> wrote:
>This would offer a single synchronizer solution, but the satellite office
>still stores a fully populated full replica farm.
True - but it is invisible to the users. For all they know, the only
replica at the satellite office is the partial.
No comment -- I just don't know.
>
>Is it worth automating? I do not know just how likely or frequently the error
>which farms avoid will occur. I understand it occurs after one or
>more failed synchs between the satellite and head office synchronizers, but
>just how many failures make it "break"?
AFAIK - one.
As this discussion was progressing, I was hurtling towards my delivery
deadline with the client. I thought it might be of general interest to show
how I got on.
Please refer to my posting "An account of setting up a replicated system"
Best Regards
Neil
"Jack MacDonald" <jackMACm...@telus.net> wrote in message
news:urfs2058a3oo3rklj...@4ax.com...
A couple comments about your article
1. You are off-base to mention me and Michael Kaplan in the same
sentence. He has forgotten more about Access Replication than I will
ever know. We are leagues apart in knowledge.
2. You are trying to force a particular replica to be the "base" in
your farm. That concept does not apply. You cannot predict what
replica will be used for sync, nor should you try. Forget about the
individual replicas in the farm -- just consider it to be a single
entity and let it do its thing. What is the "base" today may not be
base tomorrow.
3. It sounds like you are setting up a two-member farm on your laptop.
Nothing wrong with that, but one of Michael Kaplan's recommendations
is that you never interact directly with a farm member. It took me a
while to understand why: you want to ensure that no human introduces a
data conflict into a member of the replica farm. Ergo, you don't use a
farm member as your "active editing" replica.
4. Nothing in particular wrong with your naming convention, but I
prefer to use something like "MyApplicationBEFarm1.mdb",
"MyApplicationBEFarm2.mdb", etc. This naming convention implies that
no replica is different than any others -- which is true. Your naming
scheme implies that one replica carries special duties, which is
untrue. Also, I have found that replicas occasionally go corrupt. My
system allows me to stick another one into the farm without disrupting
the naming convention.
Good luck.
On Fri, 20 Feb 2004 23:53:54 -0000, "Neil Sargent"
"Jack MacDonald" <jackMACm...@telus.net> wrote in message
news:48ld30hgu1l9v49e7...@4ax.com...
> Neil
>
> A couple comments about your article
>
> 1. You are off-base to mention me and Michael Kaplan in the same
> sentence. He has forgotten more about Access Replication than I will
> ever know. We are leagues apart in knowledge.
>
Although I agree, do not underestimate the value of your own contribution.
Sometimes a different way of saying the same thing helps to get the message
over.
> 2. You are trying to force a particular replica to be the "base" in
> your farm. That concept does not apply. You cannot predict what
> replica will be used for sync, nor should you try. Forget about the
> individual replicas in the farm -- just consider it to be a single
> entity and let it do its thing. What is the "base" today may not be
> base tomorrow.
I accept this and understand it but I am not certain how to avoid it. See my
other thread.
>
> 3. It sounds like you are setting up a two-member farm on your laptop.
> Nothing wrong with that, but one of Michael Kaplan's recommendations
> is that you never interact directly with a farm member. It took me a
> while to understand why: you want to ensure that no human introduces a
> data conflict into a member of the replica farm. Ergo, you don't use a
> farm member as your "active editing" replica.
I am uncomfortable about active editing a farm member, but surely if a data
conflict arises it is going to propagate through the farm anyway. I think
there is nothing "wrong" with data conflicts per se (although they are best
avoided in the first place). I thought the reason not to edit a farm member
was to isolate the farm from some other general database corruption, locking
issue etc that may occur during "active editing" of a networked mdb.
The reason I used a farm replica as the working replica on the
satellite/laptop was to avoid the need to "double synch" to propagate
changes to head office. If I am on an "isolated" replica I must synch it
with a farm member, synch the whole farm and then indirect synch the farm to
head office. It is not obvious how to do this. I think I need to determine
the farm's synchronizer, determine its managed membership and synch to each
of its members. But if I indirect synch to head office, I have not brought
the changes into the farm. Should I synch with head office from the isolated
replica and then synch with one or more farm members?
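For what it is worth, this is how I picture the two kinds of exchange from
the working replica's point of view, in JRO (only a sketch - the paths are
invented, it needs a reference to the Microsoft Jet and Replication Objects
library, and the indirect exchange assumes the synchronizers and dropboxes
are already set up):

Dim rep As New JRO.Replica
rep.ActiveConnection = "C:\Data\AppBE_Working.mdb"   ' the isolated working replica
' Direct exchange with one local farm member...
rep.Synchronize "D:\Farm\AppBEFarm1.mdb", jrSyncTypeImpExp, jrSyncModeDirect
' ...then an indirect exchange with head office via the synchronizers.
rep.Synchronize "\\HeadOffice\Data\AppBE_Hub.mdb", jrSyncTypeImpExp, jrSyncModeIndirect
Set rep = Nothing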
>
> 4. Nothing in particular wrong with your naming convention, but I
> prefer to use something like "MyApplicationBEFarm1.mdb",
> "MyApplicationBEFarm2.mdb", etc. This naming convention implies that
> no replica is different than any others -- which is true. Your naming
> scheme implies that one replica carries special duties, which is
> untrue. Also, I have found that replicas occasionally go corrupt. My
> system allows me to stick another one into the farm without disrupting
> the naming convention.
>
> Good luck.
Accepted.
Thank you for your help. Shall we take this thread and tack it into the
other one to keep everything together?