We use the AI Management Daemon (thanks to Progress for building it; it's fantastic). We are running under Linux, FYI. Our aiarcdir points to a directory on a different local filesystem. We then have a cron job that runs every 15 minutes (as root) looking for files in the aiarcdir. When it finds any, it moves them to a new directory (DIR-B), compresses them, and sets their permissions appropriately.
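A minimal sketch of what such a first-stage script could look like. The poster did not share the actual script, so the directory names, the owner account, and the helper name `stage_ai_files` are all assumptions:

```shell
#!/bin/sh
# First-stage cron job (a sketch, not the poster's actual script): move AI
# files out of aiarcdir into a staging directory on another filesystem,
# compress them, and set ownership/permissions.

stage_ai_files() {
    aiarcdir=$1; dirb=$2; owner=$3
    for f in "$aiarcdir"/*; do
        [ -f "$f" ] || continue          # glob matched nothing: done
        mv "$f" "$dirb"/ || continue     # MOVE, not copy: aiarcdir is emptied
        base=$(basename "$f")
        gzip -f "$dirb/$base"            # compress in the staging area
        chown "$owner" "$dirb/$base.gz" 2>/dev/null || true  # needs root
        chmod 640 "$dirb/$base.gz"       # owner rw, group read, nothing else
    done
}

# cron (as root, every 15 minutes) would run something like:
# stage_ai_files /aiarchive /aistage aibackup
```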
A second cron job (running as a non-root user, but with ssh keys that allow copying to an offsite location) runs 2 minutes later and uses scp to copy everything in the DIR-B folder to the offsite location. It runs sum -s at both ends and compares the results to make sure the scp completed 100%, and then deletes the copy in DIR-B since the file is now safely somewhere else.
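The verify-then-delete step could be sketched like this. The remote host, paths, and helper names are assumptions, not the poster's script; the point is that the local copy is removed only after the checksums agree:

```shell
#!/bin/sh
# Second-stage cron job (a sketch): scp each staged file offsite, compare
# `sum -s` (System V checksum) output at both ends, and delete the local
# copy only when the checksums agree.

sums_match() {
    # compare System V checksums of two local files
    a=$(sum -s "$1" | awk '{print $1}')
    b=$(sum -s "$2" | awk '{print $1}')
    [ "$a" = "$b" ]
}

ship_offsite() {
    dirb=$1; remote=$2; rdir=$3
    for f in "$dirb"/*.gz; do
        [ -f "$f" ] || continue
        scp -q "$f" "$remote:$rdir/" || continue
        lsum=$(sum -s "$f" | awk '{print $1}')
        rsum=$(ssh "$remote" "sum -s '$rdir/$(basename "$f")'" | awk '{print $1}')
        # delete locally only when the transfer verified end to end;
        # otherwise leave the file for the next cron run to retry
        [ "$lsum" = "$rsum" ] && rm "$f"
    done
}
```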
If the remote site is not available (e.g. the VPN is down), the files sit in the DIR-B folder until they can be copied and checked. If the link goes down in the middle of a transfer, the sums won't match, so the job will try again later when the link comes back up.
The first job is MOVING them, not copying: they are gone from aiarcdir once the first script runs. After the second script runs (and they have been copied to the remote location and checked), they are removed completely from DIR-B.
So on the original machine there is only one (1) copy of the AI file(s): moved from aiarcdir to DIR-B, chowned to a useful account, and compressed with gzip. Two minutes later the second job runs, gets them offsite, and removes them completely.
We do something similar to what pfred described: archive to a local directory on prod; from there, copy to remote system's input directory; remove from aiarcdir when successfully copied; a cron job on DR rolls forward files from the input directory to the target and moves them to an archive directory.
I had a client site where the primary aiarcdir (there were two) was an NFS share to DR and it didn't work well. There was a network disruption, the NFS mount was stale and file copies no longer happened. But the production system could still see the directory, even though it now had no contents, so the daemon didn't switch to the secondary aiarcdir and stopped archiving/switching extents. We couldn't even make it switch directories manually. The daemon got into an unresponsive state, couldn't be shut down or signalled, and eventually we had to restart the DB to recover the situation.
I used to use -aiarcdir "/some_nfs_mount,/some_local_fs" but I hit a situation in 11.2 where the AIMGT could not write to the NFS directory and rather than returning with an error, it froze and stopped ALL writes to the DB. The current theory is that the AIMGT froze while holding some latch but unfortunately PSTS cannot reproduce.
2) Instruct it to archive to multiple directories simultaneously. Write an error to the log if one is not available but, so long as at least one is writeable, continue processing normally. Continually re-check directories that were previously unavailable.
2a) An interesting option might be to not mark extents empty until they have been copied to all targets. This would make transient network problems and full filesystems less disruptive and enable "self healing" of the ai daemon in these cases.
@paul koufalis: /any/ program that writes to an NFS-mounted filesystem might be blocked for an indeterminate amount of time. If that program is holding locks or has acquired other resources, they cannot be freed until the blocked process can continue. This behaviour is not limited to NFS; it can happen with I/O operations on other kinds of filesystems and devices too. Fortunately, not often.
@gus: it seems I sometimes take certain operations for granted: I incorrectly assumed that the task would either succeed or fail. It did neither, and that's a little more difficult to script around. I can only hope that the archive to the local FS has a lower probability of freezing than the write to the NFS mount.
That depends on the script: if it's written to check whether a prior script instance completed, and to send an alert if it hasn't, that would cover the cases where the NFS copy failed with a hang. It would only leave the system exposed for the time between AI archives plus the time for someone to respond with a correction.
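One common way to script that check is a lockfile whose age is monitored. This is a sketch under assumptions: the lock path, the staleness window, and the alerting mechanism (a plain echo here) would all be site-specific:

```shell
#!/bin/sh
# Guard for the copy job: refuse to start when a previous instance is
# still running, and complain loudly when it looks hung (e.g. blocked
# on a stale NFS mount).

check_prior_run() {
    lock=$1; max_age=$2              # max_age in seconds
    if [ -e "$lock" ]; then
        started=$(cat "$lock")
        now=$(date +%s)
        if [ $((now - started)) -gt "$max_age" ]; then
            # replace this echo with mail/paging in a real deployment
            echo "ALERT: AI copy started at $started appears hung" >&2
        fi
        return 1                      # never start a second copy
    fi
    date +%s > "$lock"
    return 0
}

# a cron wrapper might do:
# check_prior_run /var/run/aicopy.lock 1800 && { run_copy; rm -f /var/run/aicopy.lock; }
```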
I have been trying to install Dropbox on my Ubuntu 22.04 Linux machine for a while now. I am able to install the deb package, but after this it says it needs to download the proprietary Dropbox daemon. This download takes a long time, it eventually starts to slow down the computer, and then it times out without completing. Does anyone know what could be going on, or another way to download the proprietary daemon?
Hm... That sounds really strange. It does need the daemon to be downloaded on first launch after install; the daemon is not part of the package itself. This is usually not a problem, and the task is handled automatically by the control script, which is part of the install and responsible for launching the application. Are you sure your internet connection isn't playing tricks on you? You can do the same by hand, using the following command:
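The command itself did not survive in this post. For reference, the commonly documented manual fetch of the headless daemon looks roughly like the following; the download URL comes from Dropbox's published headless-install instructions and should be treated as an assumption that may change. It is wrapped in a function so nothing downloads until you call it:

```shell
#!/bin/sh
# Manual fetch of the proprietary Dropbox daemon (assumed URL from the
# widely documented headless-install instructions).
fetch_dropbox_daemon() {
    cd "$HOME" || return 1
    wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -
    # the archive unpacks into ~/.dropbox-dist; start the daemon with:
    "$HOME/.dropbox-dist/dropboxd"
}
# run fetch_dropbox_daemon from an interactive shell when ready
```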
I have the same problem on Linux Mint, and it is persistent. The daemon installer does not start the download. On the command line, the wget command also hangs. Only a manual download works. I doubt this is a network problem on my end, because it persists across different networks and Linux Mint installations. Could someone contact Dropbox about this?
If neither the package nor the offline archive downloads, that's NOT a Dropbox issue! It's a problem with your system's network configuration. Find out what's there and fix it. You can get additional details by using the -v option with either curl or wget.
Remap all the path requests as relative to the given path. This is sort of "Git root" - if you run git daemon with --base-path=/srv/git on example.com, then if you later try to pull git://example.com/hello.git, git daemon will interpret the path as /srv/git/hello.git.
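Concretely, the --base-path example above corresponds to an invocation like this; the repository layout on example.com is hypothetical, and --export-all is added here only to skip the usual git-daemon-export-ok marker check:

```shell
# On example.com: serve repositories under /srv/git, interpreting request
# paths relative to it; --export-all skips the git-daemon-export-ok check.
#
#   git daemon --base-path=/srv/git --export-all --reuseaddr
#
# On a client, the /srv/git prefix is implied by the daemon:
#
#   git clone git://example.com/hello.git   # served from /srv/git/hello.git
```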
If --base-path is enabled and repo lookup fails, with this option git daemon will attempt to lookup without prefixing the base path. This is useful for switching to --base-path usage, while still allowing the old paths.
Listen on a specific IP address or hostname. IP addresses can be either an IPv4 address or an IPv6 address if supported. If IPv6 is not supported, then --listen=<hostname> is also not supported and --listen must be given an IPv4 address. Can be given more than once. Incompatible with the --inetd option.
Like many programs that switch user id, the daemon does not reset environment variables such as $HOME when it runs git programs, e.g. upload-pack and receive-pack. When using this option, you may also want to set and export HOME to point at the home directory of <user> before starting the daemon, and make sure any Git configuration files in that directory are readable by <user>.
Enable/disable the service site-wide per default. Note that a service disabled site-wide can still be enabled per repository if it is marked overridable and the repository enables the service with a configuration item.
When informative errors are turned on, git-daemon will report more verbose errors to the client, differentiating conditions like "no such repository" from "repository not exported". This is more convenient for clients, but may leak information about the existence of unexported repositories. When informative errors are not enabled, all errors report "access denied" to the client. The default is --no-informative-errors.
The remaining arguments provide a list of directories. If any directories are specified, then the git-daemon process will serve a requested directory only if it is contained in one of these directories. If --strict-paths is specified, then the requested directory must match one of these directories exactly.
These services can be globally enabled/disabled using the command-line options of this command. If finer-grained control is desired (e.g. to allow git archive to be run against only in a few selected repositories the daemon serves), the per-repository configuration file can be used to enable or disable them.
This serves git send-pack clients, allowing anonymous push. It is disabled by default, as there is no authentication in the protocol (in other words, anybody can push anything into the repository, including removal of refs). This is solely meant for a closed LAN setting where everybody is friendly. This service can be enabled by setting the daemon.receivepack configuration item to true.
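A per-repository enablement could be sketched as follows; the repository path and the helper name are hypothetical, and the `git config` key is the one named above:

```shell
#!/bin/sh
# Enable anonymous push for a single repository served by git daemon.
# Only sensible on a trusted, closed LAN: the protocol has no authentication.
enable_receivepack() {
    repo=$1
    git -C "$repo" config daemon.receivepack true
}

# usage: enable_receivepack /srv/git/hello.git
```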
In this example, the root-level directory /pub will contain a subdirectory for each virtual host name supported. Further, both hosts advertise repositories simply as git://www.example.com/software/repo.git. For pre-1.4.0 clients, a symlink from /software into the appropriate default repository could be made as well.
In this example, the root-level directory /pub will contain a subdirectory for each virtual host IP address supported. Repositories can still be accessed by hostname though, assuming they correspond to these IP addresses.
git daemon will set REMOTE_ADDR to the IP address of the client that connected to it, if the IP address is available. REMOTE_ADDR will be available in the environment of hooks called when services are performed.
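As an illustration of using that environment variable, a hooks/post-receive script could record where each push came from; the log path and the helper name here are hypothetical:

```shell
#!/bin/sh
# Sketch of a post-receive hook body: record where the push came from.
# git daemon exports REMOTE_ADDR to the hook environment when the
# client's IP address is known; fall back to "unknown" otherwise.
log_remote() {
    logfile=$1
    echo "$(date -u +%FT%TZ) push from ${REMOTE_ADDR:-unknown}" >> "$logfile"
}

# in hooks/post-receive one would call: log_remote /var/log/git-pushes.log
```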
set system syslog archive size 100k
set system syslog archive files 3
set system syslog user * any alert
set system syslog user * daemon critical
set system syslog user * interactive-commands error
set system syslog host 10.10.120.161 any any
set system syslog host 10.10.120.162 any any
set system syslog file messages any info
set system syslog file messages authorization info
set system syslog file messages match RT_Screen
set system syslog file interactive-commands interactive-commands error
set system syslog file traffic-log any alert
set system syslog file traffic-log match RT_FLOW_SESSION
set system syslog file default-log any warning
set system syslog file policy_session user info
set system syslog file policy_session match RT_FLOW
set system syslog file policy_session archive size 1000k
set system syslog file policy_session archive world-readable
set system syslog file policy_session structured-data