I'm aware of the following problems that need a protocol change to fix
them:
(1) Add an optional textual message to NOTIFY
(2) Remove the hard-coded limits on database and user names
(SM_USER, SM_DATABASE), replace them with variable-length
fields.
(3) Remove some legacy elements in the startup packet
('unused' can go -- perhaps 'tty' as well). I think the
'length' field of the password packet is also not used,
but I'll need to double-check that.
(4) Fix the COPY protocol (Tom?)
(5) Fix the Fastpath protocol (Tom?)
(6) Protocol-level support for prepared queries, in order to
bypass the parser (and maybe be more compatible with the
implementation of prepared queries in other databases).
(7) Include the current transaction status, since it's
difficult for the client app to determine it for certain
(Tom/Bruce?)
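To make (2) concrete, here is a toy sketch (Python, purely illustrative; the fixed sizes below are assumptions for illustration, not necessarily the actual SM_USER/SM_DATABASE values) of the difference between fixed-size and variable-length, NUL-terminated name fields:

```python
def pack_fixed(user, database, user_len=32, db_len=64):
    """Old style: names padded out to compile-time limits."""
    return (database.encode().ljust(db_len, b'\x00') +
            user.encode().ljust(user_len, b'\x00'))

def pack_variable(user, database):
    """Proposed style: NUL-terminated strings, no hard length limit."""
    return database.encode() + b'\x00' + user.encode() + b'\x00'

old = pack_fixed('neilc', 'template1')      # always 96 bytes here
new = pack_variable('neilc', 'template1')   # only 16 bytes
```

The variable-length form is both smaller on the wire and removes the hard-coded limits entirely.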
If I've missed anything or if there is something you think we should
add, please let me know.
I can implement (1), (2), (3), and possibly (7), if someone can tell
me exactly what is required (my memory of the discussion relating to
this is fuzzy). The rest is up for grabs.
Finally, how should we manage the transition? I wasn't around for the
earlier protocol changes, so I'd appreciate any input on steps we can
take to improve backward-compatibility.
Cheers,
Neil
--
Neil Conway <ne...@samurai.com> || PGP Key ID: DB3C29FC
---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to majo...@postgresql.org so that your
message can get through to the mailing list cleanly
<snip>
>
> If I've missed anything or if there is something you think we should
> add, please let me know.
Is there any thought about changing the protocol to support
two-phase commit? Not that 2PC and distributed transactions
would be implemented in 7.4, but to prevent another protocol
change in the future?
Mike Mascari
mas...@mascari.com
My understanding is that 2PC is one way to implement multi-master
replication. If that's what you're referring to, then I'm not sure I
see the point: the multi-master replication project (pgreplication)
doesn't use 2PC, due to apparent scalability problems (not to mention
that it also uses a separate channel for communications between
backends on different nodes).
Cheers,
Neil
--
Neil Conway <ne...@samurai.com> || PGP Key ID: DB3C29FC
Actually, I was thinking along the lines of a true CREATE
DATABASE LINK implementation, where multiple databases could
participate in a distributed transaction. That would require the
backend in which the main query is executing to act as the
"coordinator" and each of the other participating databases to
act as "cohorts". It would also require a protocol change to support
PREPARE, the COMMIT-VOTE/ABORT-VOTE reply, and an ACK message
following the completion of the distributed COMMIT or ABORT.
Mike Mascari
mas...@mascari.com
Right, you need 2PC in order for pgsql to participate in transactions
that span anything outside the DB proper. A DB link is one example;
another is an external transaction manager that coordinates DB and
filesystem updates. Zope could use this, to coordinate the DB with
its internal object store.
Ross
Mike Mascari <mas...@mascari.com> wrote:
> Is there any thought about changing the protocol to support
> two-phase commit? Not that 2PC and distributed transactions
> would be implemented in 7.4, but to prevent another protocol
> change in the future?
I'm now implementing 2PC replication and distributed transactions. My 2PC
needs some support in the startup packet to establish a replication
session or a recovery session.
BTW, my 2PC replication is working, and I'm implementing 2PC recovery now.
--
NAGAYASU Satoshi <sn...@snaga.org>
If not, perhaps (if you have the time) you could put together a post
describing your work. For example: Is it an internal or external
solution? Are you sending SQL or tuples in your update messages?
How are you handling failure detection? Is this partial or full
replication?
Please forgive me for asking so many questions, but I'm rather intrigued
by database replication.
Darren
Darren Johnson <dar...@up.hrcoxmail.com> wrote:
> I would like to hear more about your implementation. Do you have some
> documentation that I could read?
Documentation is not available, but I have some slides for my presentation.
http://snaga.org/pgsql/20021018_2pc.pdf
Some answers for your questions may be in these slides.
And the current source code is available from:
http://snaga.org/pgsql/pgsql-20021025.tgz
> If not, perhaps (if you have the time) you could put together a post
> describing your work. Like
> Is it an internal or external solution. Are you sending SQL or tuples
> in your update messages.
> How are you handling failure detection? Is this partial or full
> replication?
It is an internal solution. In 2PC, pre-commit and commit are required,
so my implementation has some internal modifications to transaction
handling, log recording, and so on.
--
NAGAYASU Satoshi <sn...@snaga.org>
BEGIN;
issue some commands ...
PRECOMMIT;
-- if the above does not return an error, then
COMMIT;
In other words, 2PC would require some new commands, but a new command
doesn't affect the protocol layer.
regards, tom lane
Ah, thanks for pointing that out. Error codes would be another thing
we can ideally support in 7.4, and we'd need a protocol change to do
it properly, AFAICS. IIRC, Peter E. expressed some interest in doing
this...
Cheers,
Neil
--
Neil Conway <ne...@samurai.com> || PGP Key ID: DB3C29FC
I think the precommit-vote-commit phase of 2PC can be implemented at the
command level or at the protocol level.
In command-level 2PC, a user application (or application programmer)
must know whether the DBMS is clustered or not (in order to use the
PRECOMMIT command).
In protocol-level 2PC, no new SQL command is required.
The precommit-vote-commit phase is invoked implicitly, so a user
application can run without any modification, using the traditional
BEGIN...COMMIT style.
So I decided on the protocol-level implementation.
It doesn't affect the SQL command layer.
--
NAGAYASU Satoshi <sn...@snaga.org>
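The protocol-level cycle described above can be sketched as a toy coordinator loop. The message names follow Mike's earlier PREPARE / COMMIT-VOTE / ABORT-VOTE / ACK outline; all function names are hypothetical, not taken from the actual patch:

```python
def coordinate_commit(cohorts):
    """Run one 2PC round: collect votes, then broadcast the outcome."""
    votes = [cohort('PRECOMMIT') for cohort in cohorts]
    decision = 'COMMIT' if all(v == 'COMMIT-VOTE' for v in votes) else 'ABORT'
    for cohort in cohorts:
        cohort(decision)            # every cohort learns the outcome and ACKs
    return decision

def make_cohort(healthy):
    """A stand-in for a slave backend; votes yes only if healthy."""
    def cohort(msg):
        if msg == 'PRECOMMIT':
            return 'COMMIT-VOTE' if healthy else 'ABORT-VOTE'
        return 'ACK'                # acknowledge the final COMMIT/ABORT
    return cohort
```

A single ABORT-VOTE is enough to abort the whole distributed transaction; the client only ever sees the final outcome of its COMMIT.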
(8) Error codes (this may not require a protocol change)
- without these, PostgreSQL is useless in real DB applications
(9) Think about fully dynamic charset encoding (adding new
encodings on the fly)
Karel
--
Karel Zak <za...@zf.jcu.cz>
http://home.zf.jcu.cz/~zakkr/
C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz
On 11/05/2002 04:42:55 AM Neil Conway wrote:
> Mike Mascari <mas...@mascari.com> writes:
> > Is there any thought about changing the protocol to support
> > two-phase commit? Not that 2PC and distributed transactions would be
> > implemented in 7.4, but to prevent another protocol change in the
> > future?
>
> My understanding is that 2PC is one way to implement multi-master
> replication. If that's what you're referring to, then I'm not sure I

Another use of two-phase commit is in messaging middleware (MOM,
message-oriented middleware), where both the middleware and the database
participate in the same transaction. Consider:

- DB: begin
- MOM: begin
- DB: insert
- MOM: send message
- DB: prepare
- MOM: prepare ==> fails
- DB: rollback
- MOM: rollback

just a simple example...

Maarten
If the application continues to use just BEGIN/COMMIT, then the protocol
level must parse the command stream and recognize COMMIT in order to
replace it with PRECOMMIT, COMMIT.
If the communication library has to do that anyway, it could still do
the replacement without affecting the wire protocol, no?
------------------
Hannu
Hannu Krosing <ha...@tm.ee> wrote:
> > I think the precommit-vote-commit phase of 2PC can be implemented at
> > the command level or at the protocol level.
> >
> > In command-level 2PC, a user application (or application programmer)
> > must know whether the DBMS is clustered or not (to use the PRECOMMIT command).
> >
> > In protocol-level 2PC, no new SQL command is required.
> > The precommit-vote-commit phase is invoked implicitly, so a user
> > application can run without any modification, using the
> > traditional BEGIN...COMMIT style.
>
> If application continues to use just BEGIN/COMMIT, then the protocol
> level must parse command stream and recognize COMMIT in order to replace
> it with PRECOMMIT, COMMIT.
>
> If the communication library has to do that anyway, it could still do
> the replacement without affecting wire protocol, no ?
In my implementation, 'the extended (2PC) FE/BE protocol' is used only in
the communication between the master and slave server(s), not between a
client app and the master server.
libpq <--Normal FE/BE--> (master)postgres <--Extended(2PC)FE/BE--> (slave)postgres
                                          <--Extended(2PC)FE/BE--> (slave)postgres
                                          <--Extended(2PC)FE/BE--> (slave)postgres
A client application and the client's libpq keep working without any
modification. This is very important. And a protocol modification
between the master and slave server(s) is not such a serious issue
(I think).
--
NAGAYASU Satoshi <sn...@snaga.org>
No, I think Satoshi is suggesting that from the client's point of view,
he's embedded the entire precommit-vote-commit cycle inside the COMMIT
command.
> In my implementation, 'the extended(2PC) FE/BE protocol' is used only in
> the communication between the master and slave server(s), not between a
> client app and the master server.
>
> libpq <--Normal FE/BE--> (master)postgres <--Extended(2PC)FE/BE--> (slave)postgres
> <--Extended(2PC)FE/BE--> (slave)postgres
> <--Extended(2PC)FE/BE--> (slave)postgres
>
> A client application and client's libpq can work continuously without
> any modification. This is very important. And protocol modification
> between master and slave server(s) is not so serious issue (I think).
>
Ah, but this limits your use of 2PC to transparent DB replication - since
the client doesn't have access to the PRECOMMIT phase (usually called the
prepare phase, but that's another overloaded term in the DB world!) it
_can't_ serve as the transaction master, so the other use cases that
people have mentioned here (Zope, MOMs, etc.) wouldn't be possible.
Hmm, unless a connection can be switched into 2PC mode, so something
other than a postgresql server can act as the transaction master.
Does your implementation cascade? Can slaves have slaves?
Ross
"Ross J. Reedstrom" <reed...@rice.edu> wrote:
> > > If application continues to use just BEGIN/COMMIT, then the protocol
> > > level must parse command stream and recognize COMMIT in order to replace
> > > it with PRECOMMIT, COMMIT.
> > >
> > > If the communication library has to do that anyway, it could still do
> > > the replacement without affecting wire protocol, no ?
>
> No, I think Satoshi is suggesting that from the client's point of view,
> he's embedded the entire precommit-vote-commit cycle inside the COMMIT
> command.
Exactly. When the user sends the COMMIT command to the master server, the
master talks to the slaves to run the precommit-vote-commit cycle using
2PC. The 2PC cycle is hidden from the user application, which just talks
the normal FE/BE protocol.
>
> > In my implementation, 'the extended(2PC) FE/BE protocol' is used only in
> > the communication between the master and slave server(s), not between a
> > client app and the master server.
> >
> > libpq <--Normal FE/BE--> (master)postgres <--Extended(2PC)FE/BE--> (slave)postgres
> > <--Extended(2PC)FE/BE--> (slave)postgres
> > <--Extended(2PC)FE/BE--> (slave)postgres
> >
> > A client application and client's libpq can work continuously without
> > any modification. This is very important. And protocol modification
> > between master and slave server(s) is not so serious issue (I think).
> >
>
> Ah, but this limits your use of 2PC to transparent DB replication - since
> the client doesn't have access to the PRECOMMIT phase (usually called
> prepare phase, but that's another overloaded term in the DB world!) it
> _can't_ serve as the transaction master, so the other use cases that
> people have mentioned here (zope, MOMs, etc.) wouldn't be possible.
>
> Hmm, unless a connection can be switched into 2PC mode, so something
> other than a postgresql server can act as the transaction master.
I think the client should not act as the transaction master. But if
needed, the client can talk to the postgres servers with the extended
2PC FE/BE protocol, because all postgres servers (master and slaves)
understand both the normal FE/BE protocol and the extended 2PC FE/BE
protocol. It is switched in the startup packet.
See page 10 of:
http://snaga.org/pgsql/20021018_2pc.pdf
I embedded 'the connection type' in the startup packet to switch the
postgres backend's behavior (normal FE/BE protocol or 2PC FE/BE
protocol). In the current implementation, if the connection type is
'R', the connection is handled as a 2PC FE/BE connection (replication
connection).
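A minimal sketch of that startup-packet dispatch; the 'R' value is from the post above, while the handler names, the None default, and the rejection behavior are illustrative only:

```python
# Connection types the backend understands; 'R' is from Satoshi's
# implementation, the rest of this mapping is hypothetical.
KNOWN_TYPES = {
    'R': 'replication (extended 2PC FE/BE)',
}

def classify_connection(conn_type):
    """Decide how to treat an incoming connection.

    Returns the session kind, or None when the backend should answer
    "I can't understand" and disconnect (per the discussion below).
    """
    if conn_type is None:               # field absent: plain client session
        return 'normal FE/BE'
    return KNOWN_TYPES.get(conn_type)   # unknown type => None => reject
```

Because unknown types are rejected rather than guessed at, new connection types (recovery, bulk transfer, ...) can be added later without ambiguity.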
> Does your implementation cascade? Can slaves have slaves?
It is not implemented, but I hope so. :-)
And I think it is not so difficult.
--
NAGAYASU Satoshi <sn...@snaga.org>
But _can_ a client (libpq/jdbc/...) also talk the 2PC FE/BE protocol,
i.e. act as "master"?
> > > In my implementation, 'the extended(2PC) FE/BE protocol' is used only in
> > > the communication between the master and slave server(s), not between a
> > > client app and the master server.
> > >
> > > libpq <--Normal FE/BE--> (master)postgres <--Extended(2PC)FE/BE--> (slave)postgres
> > > <--Extended(2PC)FE/BE--> (slave)postgres
> > > <--Extended(2PC)FE/BE--> (slave)postgres
> > >
> > > A client application and client's libpq can work continuously without
> > > any modification. This is very important. And protocol modification
> > > between master and slave server(s) is not so serious issue (I think).
> > >
> >
> > Ah, but this limits your use of 2PC to transparent DB replication - since
> > the client doesn't have access to the PRECOMMIT phase (usually called
> > prepare phase, but that's another overloaded term in the DB world!) it
> > _can't_ serve as the transaction master, so the other use cases that
> > people have mentioned here (zope, MOMs, etc.) wouldn't be possible.
> >
> > Hmm, unless a connection can be switched into 2PC mode, so something
> > other than a postgresql server can act as the transaction master.
>
> I think the client should not act as the transaction master. But if it
> is needed, the client can talk to postgres servers with the extended 2PC
> FE/BE protocol.
>
> Because all postgres servers(master and slave) can understand both the
> normal FE/BE protocol and the extended 2PC FE/BE protocol. It is
> switched in the startup packet.
Why is the protocol change necessary?
Is there some fundamental reason that the slave backends can't just wait
and see whether the first "commit" command is PRECOMMIT or COMMIT, and
then act accordingly for each transaction?
-----------------
Hannu
"Ross J. Reedstrom" <reed...@rice.edu> wrote:
> > Because the postgres backend must detect the type of the incoming
> > connection (from the client app or the master).
> >
> > If it is coming from the client, the backend relays the queries to the
> > slaves (acting as the master).
> >
> > But if it is coming from the master server, the backend must act as a
> > slave, and does not relay the queries.
>
> So, your replication is an all-or-nothing type of thing? you can't
> replicate some tables and not others? If only some tables are replicated,
> then you can't decide if this is a distributed transaction until it's
> been parsed.
Yes. My current replication implementation is 'query based' replication,
so all queries to the database (except SELECT commands) are replicated.
The database is replicated completely, not partially.
I know this 'query based' design can't be used for a distributed
transaction. I think more internal communication between the distributed
servers is required. We need 'partial transfer of tables', 'bulk
transfer of indexes' or something like that for a distributed
transaction. I'm working on it now.
As I said, several connection types (a client application connection, an
internal transfer connection, a recovery connection) will be required
for replication and distributed transactions in the near future.
Embedding the connection type in the startup packet is a good way to
decide how the backend should behave. It is simple and extendable,
isn't it?
If the backend can't understand the incoming connection type, the
backend can answer "I can't understand" and simply disconnect.
>
> Also, if we want to cascade, then one server can be both master and slave,
> as it were. For full-on-2PC, I'm not sure cascading is a good idea, but
> it's something to consider, especially if there's provisions for partial
> replication, or 'optional' slaves.
Yes. There are several implementation designs for replication. Sync or
async, pre- or post-, full or partial, query-level or I/O-level or
journal-level. I think there is no "best way" for replication, because
applications have different requirements in different situations.
So the protocol should be more extendable.
> I think Hannu is suggesting that COMMIT could occur from either of two
> states in the transaction state diagram: from an open transaction, or
> from PRECOMMIT. There's no need to determine before that moment if
> this particular transaction is part of a 2PC or not, is there? So, no
> you don't _require_ PRECOMMIT/COMMIT because it's clustered: if a
> 'bare' COMMIT shows up, do what you currently do: hide the details.
> If a PRECOMMIT shows up, report status back to the 'client'.
After the status is returned, what does the 'client' do?
Should the client talk the 2PC protocol?
For example, if the database is replicated across 8 servers, does the
client application keep 8 connections, one for each server?
Is this good?
--
NAGAYASU Satoshi <sn...@snaga.org>
> Because the postgres backend must detect the type of the incoming
> connection (from the client app or the master).
>
> If it is coming from the client, the backend relays the queries to the
> slaves (acting as the master).
>
> But if it is coming from the master server, the backend must act as a
> slave, and does not relay the queries.
So, your replication is an all-or-nothing type of thing? You can't
replicate some tables and not others? If only some tables are replicated,
then you can't decide whether this is a distributed transaction until
it's been parsed.
Also, if we want to cascade, then one server can be both master and slave,
as it were. For full-on-2PC, I'm not sure cascading is a good idea, but
it's something to consider, especially if there's provisions for partial
replication, or 'optional' slaves.
>
> I think there are several types of connection in the sync replication or
> the distributed transaction. Especially, the bulk transfer of tables or
> indexes will be necessary for distributed queries in the future.
>
> So, I think embedding the connection type information in the startup
> packet is a good idea.
>
> >
> > Is there some fundamental reason that the slave backends can't just wait
> > and see if the first "commit" command is PRECOMMIT or COMMIT and then
> > act accordingly on for each transaction ?
>
> Are two "commit" commands required on the clustered postgres?
> And is one "commit" command required on the single postgres?
I think Hannu is suggesting that COMMIT could occur from either of two
states in the transaction state diagram: from an open transaction, or
from PRECOMMIT. There's no need to determine before that moment if
this particular transaction is part of a 2PC or not, is there? So, no
you don't _require_ PRECOMMIT/COMMIT because it's clustered: if a
'bare' COMMIT shows up, do what you currently do: hide the details.
If a PRECOMMIT shows up, report status back to the 'client'.
So, it seems to me that the minimum protocol change necessary to support
this model is reporting the current transaction status to the client.
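That two-state view of COMMIT can be written down as a tiny state table (state names are illustrative, not from any patch): a 'bare' COMMIT is legal from an open transaction, and COMMIT is equally legal from the precommitted state, so plain clients never need to know PRECOMMIT exists.

```python
# COMMIT is reachable from two states, so 2PC stays invisible to
# clients that never issue PRECOMMIT.
TRANSITIONS = {
    ('idle',         'BEGIN'):     'open',
    ('open',         'PRECOMMIT'): 'precommitted',  # vote reported to client
    ('open',         'COMMIT'):    'idle',          # 'bare' COMMIT: hide 2PC
    ('precommitted', 'COMMIT'):    'idle',          # explicit second phase
    ('precommitted', 'ABORT'):     'idle',
    ('open',         'ABORT'):     'idle',
}

def step(state, command):
    """Apply one command; reject anything not in the state table."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError(f'{command} not allowed in state {state}')
```

Under this model, all the server needs to report back is which state the transaction is now in, which is exactly the "current transaction status" item from Neil's original list.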
> I think it will confuse the application programmer.
I think your mental image of an application programmer needs to be
expanded: it should also include middleware vendors, who very much want
to be able to control a distributed transaction, one part of which may
be a postgresql replicated cluster.
Ross
On Tuesday 05 November 2002 01:14 am, Satoshi Nagayasu wrote:
> Hi all,
>
> Mike Mascari <mas...@mascari.com> wrote:
> > Is there any thought about changing the protocol to support
> > two-phase commit? Not that 2PC and distributed transactions
> > would be implemented in 7.4, but to prevent another protocol
> > change in the future?
>
> I'm now implementing 2PC replication and distributed transaction. My 2PC
> needs some support in startup packet to establish a replication session
> and a recovery session.
>
> BTW, 2PC replication is working, and I'm implementing 2PC recovery now.
Here are a couple of other changes you might consider (maybe these changes
already exist and I just don't know about them):
a) Make much of the metadata sent to the client optional. When I execute
20 fetches against the same cursor, I don't need the same metadata 20
times. For narrow result sets, the metadata can easily double or triple
the number of bytes sent across the net. It looks like the protocol needs
the field count, but everything else seems to be sent for the convenience
of the client application.
b) Send a decoded version of atttypmod - specifically, decode the
precision and scale for numeric types.
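Point (a) could be sketched as a client-side cache keyed by cursor, so duplicate row metadata is recognized and could be suppressed on the wire. This is purely illustrative; no such mechanism exists in the current protocol, and the class and method names are made up:

```python
class MetadataCache:
    """Remember the last row description seen per cursor."""

    def __init__(self):
        self._by_cursor = {}

    def on_row_description(self, cursor, fields):
        """Store metadata the first time; report whether it was new.

        A protocol with optional metadata could skip sending 'fields'
        at all whenever this would return False.
        """
        if self._by_cursor.get(cursor) == fields:
            return False            # same as last time: redundant bytes
        self._by_cursor[cursor] = fields
        return True
```

For a narrow result set fetched 20 times, 19 of those row descriptions are pure overhead, which is the saving point (a) is after.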
regards
Haris Peco
Type is returned by PQftype(), length is returned by PQfsize(). Precision
and scale are encoded in the return value from PQfmod() and you have to
have a magic decoder ring to understand them. (Magic decoder rings are
available, you just have to read the source code :-)
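For what it's worth, the decoder ring for numeric columns looks roughly like this in the current sources (precision and scale packed into a single int, offset by VARHDRSZ); a sketch worth double-checking against the backend source before relying on it:

```python
VARHDRSZ = 4  # per the PostgreSQL sources

def decode_numeric_typmod(atttypmod):
    """Decode (precision, scale) from a numeric column's typmod.

    numeric(p,s) is stored as ((p << 16) | s) + VARHDRSZ; a typmod of
    -1 means an unconstrained numeric.
    """
    if atttypmod < VARHDRSZ:
        return None                       # unconstrained
    tmp = atttypmod - VARHDRSZ
    return (tmp >> 16) & 0xFFFF, tmp & 0xFFFF
```

So for a numeric(10,2) column, PQfmod() returns ((10 << 16) | 2) + 4, and the decoding above recovers the declared precision and scale.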
PQftype() is not easy to use because it returns an OID instead of a name
(or a standardized symbol), but I can't think of anything better to return
to the client. Of course if you really want to make use of PQftype(), you
can preload a client-side cache of type definitions. I seem to remember
seeing a patch a while back that would build the cache and decode precision
and scale too.
PQfsize() is entertaining, but not often what you really want (you really
want the width of the widest value in the column after conversion to some
string format; it seems reasonable to let the client application worry
about that, although maybe that would be a useful client-side libpq function).
regards
Haris Peco
The full tarball is based on 7.3devel; there is no patch for 7.4devel.
--
NAGAYASU Satoshi <sn...@snaga.org>
But this will make such a view terribly slow, as it has to do
max(length(field)) over the whole table for any field displayed.
------------
Hannu