> I think it can be handled this way:
>
> When a connection is opened, users need to supply a list({TypeName ::
> atom(), EncodeFun :: fun(), DecodeFun :: fun()}) or map(TypeName =>
> {EncodeFun, DecodeFun}). The connection will perform some queries to figure
> out the type mapping; array types can be handled automatically by appending
> "[]" to the type name. This option can be passed along with the host name,
> port, etc. during creation as part of the pool settings.
>
> When extensions are created/dropped on the fly, users are responsible for
> restarting connections or calling update_type_map on all connections.
Yes, I think this is the most reasonable strategy. I actually started
coding up something like it on Friday:
cache_dynamic_types(C) ->
    {ok, _Cols, Rows} =
        equery(C, "select typname, typarray, oid"
                  " from pg_catalog.pg_type"),
    Types = [hstore],
    lists:map(fun(T) ->
        %% Fails with a badmatch on false if the type is not
        %% installed in this database, which is what we want.
        {_Name, BinArrOid, BinOid} =
            lists:keyfind(atom_to_binary(T, latin1), 1, Rows),
        ArrOid = binary_to_integer(BinArrOid),
        Oid = binary_to_integer(BinOid),
        %% Stash the results in the process dictionary,
        %% for both regular types and array types.
        put({oid2type, Oid}, T),
        put({type2oid, T}, Oid),
        put({oid2type, ArrOid}, {array, T}),
        put({type2oid, {array, T}}, ArrOid),
        [{T, Oid}, {{array, T}, ArrOid}]
    end, Types).
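For the option-passing half of the proposal quoted above, the connect-time shape could look something like this (purely a sketch: the type_map key and the encode_hstore/1 and decode_hstore/1 codec funs are hypothetical names, not an existing API):

```erlang
%% Hypothetical pool/connection options. Only type_map is new here;
%% encode_hstore/1 and decode_hstore/1 stand in for user-supplied
%% codec funs.
Opts = [{host, "localhost"},
        {port, 5432},
        {type_map, #{hstore => {fun encode_hstore/1,
                                fun decode_hstore/1}}}].
```

The connection would then run something like cache_dynamic_types/1 once at startup, looking up the oids for every key in the map.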
I won't likely be able to finish it up (for instance moving the
process dictionary stuff into the gen_server!) until Tuesday, so if
anyone else wants to roll with this, have at it! I agree this should
happen at connection time. It should also be exposed as a command for
weird cases when people want to regenerate the cache.
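The "expose as a command" part could be a synchronous call into the connection process once the cache lives in the gen_server state rather than the process dictionary (again a sketch; update_type_map/1, the map-shaped state, and the callback clause are assumptions, not a settled API):

```erlang
%% Hypothetical client-side command to regenerate the type cache,
%% e.g. after CREATE EXTENSION has run on the server.
update_type_map(C) ->
    gen_server:call(C, update_type_map, infinity).

%% Corresponding callback clause (sketch). cache_dynamic_types/1
%% would issue the pg_type query over the socket held in State and
%% return the mappings instead of put/2-ing them.
handle_call(update_type_map, _From, State) ->
    TypeMap = cache_dynamic_types(State),
    {reply, ok, State#{type_map => TypeMap}}.
```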