jackc/pgx v4


Rene Seiler

Aug 5, 2024, 3:19:08 AM
to barravaco
The toolkit component is a related set of packages that implement PostgreSQL functionality such as parsing the wire protocol and type mapping between PostgreSQL and Go. These underlying packages can be used to implement alternative drivers, proxies, load balancers, logical replication clients, etc.

The database/sql interface only allows the underlying driver to return or receive the following types: int64, float64, bool, []byte, string, time.Time, or nil. Handling other types requires implementing the database/sql.Scanner and the database/sql/driver.Valuer interfaces, which require transmission of values in text format. The binary format can be substantially faster, which is what the pgx interface uses.


pgx tests naturally require a PostgreSQL database. It will connect to the database specified in the PGX_TEST_DATABASE environment variable. The PGX_TEST_DATABASE environment variable can be either a URL or a DSN. In addition, the standard PG* environment variables will be respected. Consider using direnv to simplify environment variable handling.


pgx supports the same versions of Go and PostgreSQL that are supported by their respective teams. For Go that is the two most recent major releases and for PostgreSQL the major releases in the last 5 years. This means pgx supports Go 1.17 and higher and PostgreSQL 10 and higher. pgx is also tested against the latest version of CockroachDB.


This is a database/sql compatibility layer for pgx. pgx can be used as a normal database/sql driver, but at any time, the native interface can be acquired for more performance or PostgreSQL specific functionality.


Over 70 PostgreSQL types are supported including uuid, hstore, json, bytea, numeric, interval, inet, and arrays. These types support database/sql interfaces and are usable outside of pgx. They are fully tested in pgx and pq. They also support a higher performance interface when used with the pgx driver.


pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).


pgx provides lower-level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL-specific features. Import github.com/jackc/pgx/v4/stdlib to use pgx as a database/sql compatible driver.


pgx can map nulls in two ways. The first is that the pgtype package provides types that have a data field and a status field; they work in a similar fashion to database/sql. The second is to use a pointer to a pointer.
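A pure-Go sketch of the pointer-to-pointer convention; the scanNullable helper is illustrative only, standing in for what pgx does inside Rows.Scan when you pass it a **string:

```go
package main

import "fmt"

// scanNullable mimics how pgx treats a pointer-to-pointer destination:
// a SQL NULL sets the inner pointer to nil, a non-NULL value allocates
// and fills it. (Illustrative helper, not part of pgx.)
func scanNullable(dst **string, value *string) {
	if value == nil {
		*dst = nil
		return
	}
	s := *value
	*dst = &s
}

func main() {
	var name *string // passing &name to Scan gives pgx a **string

	v := "alice"
	scanNullable(&name, &v) // non-NULL column
	fmt.Println(*name)

	scanNullable(&name, nil) // NULL column
	fmt.Println(name == nil)
}
```

In real pgx code the same shape is simply `rows.Scan(&name)` with `name` declared as `*string`.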


pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice, an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types.


pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types.


If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time), pgx will attempt to encode the underlying type. While this is usually desired behavior, it can produce surprising results if one of the two types implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type.


Row values and composite types are represented as pgtype.Record. It is possible to get values of your custom type by implementing the DecodeBinary interface. Decoding into pgtype.Record first can simplify the process by avoiding dealing with the raw protocol directly.


BeginFunc and BeginTxFunc are variants that begin a transaction, execute a function, and commit or roll back the transaction depending on the return value of the function. These can be simpler and less error-prone to use.


Prepared statements can be manually created with the Prepare method. However, this is rarely necessary because pgx includes an automatic statement cache by default. Queries run through the normal Query, QueryRow, and Exec functions are automatically prepared on first execution and the prepared statement is reused on subsequent executions. See ParseConfig for information on how to customize or disable the statement cache.


Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{}, use CopyFromRows to wrap it in a CopyFromSource interface. Or implement CopyFromSource to avoid buffering the entire data set in memory.


pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, go.uber.org/zap, github.com/rs/zerolog, and the testing log are provided in the log directory.


pgx is compatible with PgBouncer in two modes. One is when the connection has a statement cache in "describe" mode. The other is when the connection is using the simple protocol. This can be set with the PreferSimpleProtocol config option.


BeginFunc starts a transaction and calls f. If f does not return an error, the transaction is committed. If f returns an error, the transaction is rolled back. The context will be used when executing the transaction control statements (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of f.


BeginTxFunc starts a transaction with txOptions determining the transaction mode and calls f. If f does not return an error, the transaction is committed. If f returns an error, the transaction is rolled back. The context will be used when executing the transaction control statements (BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of f.


CopyFrom requires all values to use the binary format. Almost all types implemented by pgx use the binary format by default. Types implementing Encoder can only be used if they encode to the binary format.


It is strongly recommended that the connection be idle (no in-progress queries) before the underlying *pgconn.PgConn is used, and the connection must be returned to the same state before any *pgx.Conn methods are used again.


Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.


Query sends a query to the server and returns a Rows to read the results. Only errors encountered sending the query and initializing Rows will be returned. Err() on the returned Rows must be checked after the Rows is closed to determine if the query executed successfully.


The returned Rows must be closed before the connection can be used again. It is safe to attempt to read from the returned Rows even if an error is returned. The error will be available in rows.Err() after rows are closed. It is allowed to ignore the error returned from Query and handle it in Rows.


Err() on the returned Rows must be checked after the Rows is closed to determine if the query executed successfully, as some errors can only be detected by reading the entire response, e.g. a divide-by-zero error on the last row.


For extra control over how the query is executed, the types QuerySimpleProtocol, QueryResultFormats, and QueryResultFormatsByOID may be used as the first args to control exactly how the query is executed. This is rarely needed. See the documentation for those types for details.


QueryFunc executes sql with args. For each row returned by the query, the values will be scanned into the elements of scans and f will be called. If any row fails to scan or f returns an error, the query will be aborted and the error will be returned.


SendBatch sends all queued queries to the server at once. All queries are run in an implicit transaction unless explicit transaction control statements are executed. The returned BatchResults must be closed before the connection is used again.


A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It uses the context it was initialized with for all operations. It implements these interfaces:


QueryFuncRow is an interface instead of a struct to allow tests to mock QueryFunc. However, adding a method to an interface is technically a breaking change. Because of this the QueryFuncRow interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.


Row is an interface instead of a struct to allow tests to mock QueryRow. However, adding a method to an interface is technically a breaking change. Because of this the Row interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.


Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.


Rows is an interface instead of a struct to allow tests to mock Query. However, adding a method to an interface is technically a breaking change. Because of this the Rows interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.


Tx is an interface instead of a struct to enable connection pools to be implemented without relying on internal pgx state, to support pseudo-nested transactions with savepoints, and to allow tests to mock transactions. However, adding a method to an interface is technically a breaking change. If new methods are added to Conn it may be desirable to add them to Tx as well. Because of this the Tx interface is partially excluded from semantic version requirements. Methods will not be removed or changed, but new methods may be added.
