Database Complete Book


Baldomero Prado

Aug 5, 2024, 2:05:40 AM
to leslipeachtcon
Designed for academic institutions, this database is a leading resource for scholarly research. It supports high-level research in the key areas of academic study by providing journals, periodicals, reports, books and more.

Ranked as the second largest database within the Academic Search family, Academic Search Complete stands as a distinguished resource for scholarly investigation. Providing access to 5,795 active full-text journals with a journal retail value of $3,013,660.89, this multidisciplinary database supports high-level research in the key areas of academic study.


EBSCO databases support all learning types through textual and visual subject browsing, and information literacy training through subject access points in more than 30 languages.


From what I've read (and I could be wrong), finding a good PostgreSQL blog has been challenging, so please feel free to recommend some. I need to understand how this application works so I can trust my backups and Slony replication. I had a developer restore backups I took from pgAdmin III in the custom, directory, and tar formats, with OIDs selected. He said two of them didn't load; the tar format restored, but only the directory structure, not the data. I'm really confused now.


pg_dumpall apparently has a --globals option that's supposed to back up everything, but the help for pg_dumpall shows "-g, --globals-only  dump only global objects, no databases", not a --globals option.


I thought pg_dumpall would at least back up foreign keys, but even that seems to be an 'option'. According to the documentation, even with pg_dumpall I need to use the -o option to back up foreign keys. I can't really imagine when I wouldn't want to back up foreign keys, so this would make more sense as a default option.


The Postgres documentation suggests that the globals behavior I was looking for is the default in this version, but it still needs the -o option. If someone can verify this, or give me an example command to restore a single database elsewhere with everything it needs, I'd appreciate it.


Edit: the site is asking me to show the uniqueness of this question by editing it. This question raises the issue and gets clarity on OIDs in backups and the difference between globals and non-globals, as well as recommendations for testing restores to ensure the backup is good, as opposed to just backing up. Thanks to the answers I was able to back up, figure out globals/OIDs, and start a nightly test-restore process on Postgres using cron jobs. Thanks for the help!


You can dump the whole PostgreSQL cluster with pg_dumpall. That's all the databases and all the globals for a single cluster. From the command line on the server, I'd do something like this. (Mine's listening on port 5433, not on the default port.) You may or may not need the --clean option.
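A sketch of the commands I mean, assuming the cluster listens on port 5433 and the database to move is named mydb (both are assumptions — adjust to your setup):

```shell
# Dump the entire cluster: all databases plus globals (roles, tablespaces)
pg_dumpall -p 5433 --clean -f cluster.sql

# Or, to move a single database elsewhere: dump the globals separately,
# then dump the one database in custom format
pg_dumpall -p 5433 --globals-only -f globals.sql
pg_dump -p 5433 -Fc -f mydb.dump mydb

# On the target server: load the globals first, then restore the database
psql -p 5432 -f globals.sql postgres
pg_restore -p 5432 --create -d postgres mydb.dump
```

Loading globals.sql first matters because the restored database's objects may be owned by roles that only pg_dumpall exports; pg_dump alone never includes roles or tablespaces.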


No, that reference says "Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used." (Emphasis added.) I think it's unlikely that your application references the OID columns. You don't need to use this option to "backup foreign keys". (Read the dump file in your editor or file viewer.)


In a complete database restore, the goal is to restore the whole database. The whole database is offline for the duration of the restore. Before any part of the database can come online, all data is recovered to a consistent point in which all parts of the database are at the same point in time and no uncommitted transactions exist.


Under the full recovery model, after you restore your data backup or backups, you must restore all subsequent transaction log backups and then recover the database. You can restore a database to a specific recovery point within one of these log backups. The recovery point can be a specific date and time, a marked transaction, or a log sequence number (LSN).


When restoring a database, particularly under the full recovery model or bulk-logged recovery model, you should use a single restore sequence. A restore sequence consists of one or more restore operations that move data through one or more of the phases of restore.


We recommend that you do not attach or restore databases from unknown or untrusted sources. These databases could contain malicious code that might execute unintended Transact-SQL code or cause errors by modifying the schema or the physical database structure. Before you use a database from an unknown or untrusted source, run DBCC CHECKDB on the database on a non-production server. Also, examine the user-written code in the database, such as stored procedures or other user-defined code.
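For example, a minimal integrity check to run on a non-production server first (the database name here is an assumption):

```sql
-- Check the logical and physical integrity of every object in the database
DBCC CHECKDB (N'UntrustedDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```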


Under the bulk-logged recovery model, backing up any log that contains bulk-logged operations requires access to all data files in the database. If the data files cannot be accessed, the transaction log cannot be backed up. In that case, you have to manually redo all changes that were made since the most recent log backup.


The following illustration shows this restore sequence. After a failure occurs (1), a tail-log backup is created (2). Next, the database is restored to the point of the failure. This involves restoring a database backup, a subsequent differential backup, and every log backup taken after the differential backup, including the tail-log backup.


The following Transact-SQL example shows the essential options in a restore sequence that restores the database to the point of failure. The example creates a tail-log backup of the database. Next, the example restores a full database backup and log backup and then restores the tail-log backup. The example recovers the database in a separate, final step.
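The example itself appears to have been lost from this excerpt; the following is a sketch of the sequence it describes, with the database name and backup file paths as assumptions:

```sql
-- 1. Back up the tail of the log; the database is left in the RESTORING state
BACKUP LOG AdventureWorks2022
    TO DISK = N'Z:\SQLServerBackups\AdventureWorks2022_Tail.bak'
    WITH NORECOVERY;

-- 2. Restore the full database backup, without recovering yet
RESTORE DATABASE AdventureWorks2022
    FROM DISK = N'Z:\SQLServerBackups\AdventureWorks2022_Full.bak'
    WITH NORECOVERY;

-- 3. Restore the log backup, still without recovering
RESTORE LOG AdventureWorks2022
    FROM DISK = N'Z:\SQLServerBackups\AdventureWorks2022_Log.bak'
    WITH NORECOVERY;

-- 4. Restore the tail-log backup
RESTORE LOG AdventureWorks2022
    FROM DISK = N'Z:\SQLServerBackups\AdventureWorks2022_Tail.bak'
    WITH NORECOVERY;

-- 5. Recover the database in a separate, final step
RESTORE DATABASE AdventureWorks2022 WITH RECOVERY;
```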


This example uses a database backup and log backup that is created in the "Using Database Backups Under the Full Recovery Model" section in Full Database Backups (SQL Server). Before the database backup, the AdventureWorks2022 sample database was set to use the full recovery model.


Under the full recovery model, a complete database restore can usually be recovered to a point of time, a marked transaction, or an LSN within a log backup. However, under the bulk-logged recovery model, if the log backup contains bulk-logged changes, point-in-time recovery is not possible.


The following example assumes a mission-critical database system for which a full database backup is created daily at midnight, a differential database backup is created on the hour, Monday through Saturday, and transaction log backups are created every 10 minutes throughout the day. To restore the database to the state it was in at 5:19 A.M. Wednesday, do the following:
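Given that schedule, the shortest sequence is: back up the tail of the log, restore Wednesday's midnight full backup, restore the 5:00 A.M. differential, then restore the 5:10 A.M. log backup and the tail-log backup with STOPAT, and finally recover. A sketch in Transact-SQL, where the database name, file names, and date are assumptions:

```sql
-- 1. Back up the tail of the log
BACKUP LOG Sales TO DISK = N'Z:\Backups\Sales_Tail.bak' WITH NORECOVERY;

-- 2. Restore the full backup taken Wednesday at midnight
RESTORE DATABASE Sales FROM DISK = N'Z:\Backups\Sales_Full_Wed.bak' WITH NORECOVERY;

-- 3. Restore the most recent differential (5:00 A.M. Wednesday)
RESTORE DATABASE Sales FROM DISK = N'Z:\Backups\Sales_Diff_0500.bak' WITH NORECOVERY;

-- 4. Restore the log backups taken after that differential, stopping at 5:19 A.M.
RESTORE LOG Sales FROM DISK = N'Z:\Backups\Sales_Log_0510.bak'
    WITH NORECOVERY, STOPAT = '2024-08-07 05:19:00';
RESTORE LOG Sales FROM DISK = N'Z:\Backups\Sales_Tail.bak'
    WITH NORECOVERY, STOPAT = '2024-08-07 05:19:00';

-- 5. Recover the database
RESTORE DATABASE Sales WITH RECOVERY;
```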


All of a sudden I keep getting this warning: "Performance warning: connecting to the database server did not complete in a timely fashion. This event usually indicates that the database server is overloaded." Performance is dreadful while navigating through the repository. I tested moving the VMs back to the old server and performance is great again.


Was Laserfiche able to resolve this issue? I am seeing the same warning in the event viewer even though our SQL server is not overloaded. We run the Laserfiche Server service on its own VM, which has 32 cores and 128 GB of RAM.


There isn't enough information in this thread or your post to hazard a guess. The message is reported by the Laserfiche Server if it takes more than 1 second to establish a connection with SQL Server. There could be various reasons why the connection between two machines is slow; the message indicates the most common one. Given your server specs it's possible that this is a network issue rather than load on SQL, but we can't rule the latter out.


The connection taking a long time is just a symptom, so the resolution is going to depend on the underlying cause. In the original post, performance is uniformly bad, so that at least rules out something specific to establishing a connection. From the Laserfiche server's perspective, all it knows is that responses aren't coming back as quickly as they should. The obvious places for the time to be lost are 1) queries are executing slowly, in which case the most likely cause is that the server is overwhelmed — this is what the error message gets at — or 2) general network slowness between the Laserfiche server and the SQL server.


If load is light on the SQL server, and a tool like Profiler indicates that the queries are executing quickly, then you're probably looking at a network problem. In that case, you'll fall back on the typical troubleshooting process for that: ping, Wireshark, firewall logs, testing latency from/to other machines, etc.


This chapter assumes that some or all of your data files are lost or damaged. Typically, this situation is caused by a media failure or accidental deletion. Your goal is to return the database to normal operation by restoring the damaged files from RMAN backups and recovering all database changes.


You have the complete set of archived redo logs and incremental backups needed for recovery of your data file backups. Every data file either has a backup, or a complete set of online and archived redo logs goes back to the creation of a data file with no backup.


The control file knows about the data file, that is, you backed up the control file after data file creation, but the data file itself is not backed up. If the data file record is in the control file, then RESTORE creates the data file in the original location or in a user-specified location. The RECOVER command can then apply the necessary logs to the data file.
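A sketch of that flow in RMAN, where the data file number is an assumption:

```
RMAN> SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
RMAN> RESTORE DATAFILE 7;   -- restored from backup, or created from the control file record
RMAN> RECOVER DATAFILE 7;   -- applies the necessary archived and online redo logs
RMAN> SQL 'ALTER DATABASE DATAFILE 7 ONLINE';
```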


The control file does not have the data file record, that is, you did not back up the control file after data file creation. During recovery, the database detects the missing data file and reports it to RMAN, which creates a data file and continues recovery by applying the remaining logs. If the data file was created in a parent incarnation, then it is created during the restore or recovery phase as appropriate.


Zero Data Loss Recovery Appliance (Recovery Appliance) substantially reduces the window of potential data loss that exists between successive archived redo log backups, enabling you to recover target databases to within a fraction of a second of a database failure.
