Note: I'm only covering Postgres here, but the idea is pretty universal.
Good pagination starts with good sorting, so I'd suggest starting with a good primary key. An integer PK could be enough, or it could fall short (a custom generator plus a Postgres uuid field would be the next logical choice for me: something timestamp-based but small and fast in any case). I'd supplement that with a secondary id (a modification/version id, generated exactly like the pk but reset on every save). This way you cover the two most important sort orders: by creation and by modification (you can even implement some simple data versioning if you use both as a composite pk, but that's another story). Then, if you need sorting on other fields, it would be extremely wise to create an index for every single sorting option (if the field you sort on is not unique, use index_together with the pk and/or version id as the last field).
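To illustrate what I mean by "timestamp-based but small and fast", here is a toy sketch of such a generator (the custom epoch and the 16-bit sequence are arbitrary choices of mine, not a recommendation):

```python
import itertools
import time

# Toy sketch: a 64-bit, roughly time-ordered id with milliseconds
# since a custom epoch in the high bits and a per-process sequence
# in the low 16 bits. Illustrative only.
_EPOCH_MS = 1_600_000_000_000  # arbitrary custom epoch
_seq = itertools.count()

def make_id(now_ms=None):
    """Return an id that sorts by creation time, with the sequence
    breaking ties within the same millisecond."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return ((now_ms - _EPOCH_MS) << 16) | (next(_seq) & 0xFFFF)
```

The same generator can produce the version id: call it again on every save, so "order by version id" becomes "order by modification time".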
Now, having that, you can start with the actual pagination. Since you always sort by a unique keyset in this scheme, you should be able to paginate using a greater-than comparison, and to make it generic enough let's assume you always sort/paginate by multiple fields (an arbitrary number of them).

You have to somehow pass the information about which fields you want to sort on and which keyset you want to compare against from client to server. You can invent your own weird way to do it, but the best (I think) would be to use standard query filters and (multiple, if needed) order_by. "Cursor" pagination in Django REST Framework, for example, is essentially this, but it supports only a single sorting field (you cannot use unique keysets to supplement a non-unique primary sorting field) and for reasons unknown encodes the filters to base64, bloating the size enormously. With the proposed scheme everything is transparent and flexible, if slightly verbose (you can always add base64 encoding or whatnot if you want it less transparent and longer). Your url would look kinda like this: /view/?order_by=slug&order_by=-id&slug_lt=some_title&id_lt=123&limit=50 (there are less verbose options like ?o=-field1+field2-field3&k=+some_title&k=-123&l=50 that are more urlsafe if you really care about a few bytes, but mostly just make your param names configurable and you should be fine). If you absolutely have to use non-unique keysets, you can augment this by adding an optional offset (a small offset with any gt/lt filter is better than a big offset and no filtering).

Next, you need to calculate the next/previous pages from the current one. There were some links above that explain it at length, but basically, to keep it stateless, you should always query an extra item and use its value with >= for the next page and with < and reversed order for the previous.
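The whole cycle can be sketched without a database (the function name and dict rows are illustrative, not any library's API): sort by the requested fields, filter with an inclusive tuple comparison against the keyset, fetch limit + 1 rows, and take the extra row's keyset as the next cursor.

```python
from operator import itemgetter

def keyset_page(rows, order_by, keyset=None, limit=50):
    """Return (page, next_keyset). Ascending-only sketch; assumes at
    least two order_by fields so itemgetter returns a tuple.
    Hypothetical helper, not a real library API."""
    key = itemgetter(*order_by)
    ordered = sorted(rows, key=key)
    if keyset is not None:
        # inclusive: the cursor is the keyset of this page's first row
        ordered = [r for r in ordered if key(r) >= tuple(keyset)]
    extra = ordered[:limit + 1]          # always query one extra item
    page = extra[:limit]
    # the extra row's keyset becomes the >= cursor for the next page
    next_keyset = key(extra[limit]) if len(extra) > limit else None
    return page, next_keyset
```

In SQL the filter is a single row comparison, e.g. WHERE (slug, id) >= ('some_title', 123) ORDER BY slug, id LIMIT 51, which the multi-column index from above serves directly.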
Another beauty of keyset pagination is that it doesn't have to be exact. If an object has been removed, the comparisons keep working. You can also trivially construct keysets for getting the first and last pages (paginating back from the last would produce different pages, but I don't think that matters much in most real-life scenarios).
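For instance, the last page falls out of the same machinery by reversing the order and flipping the result (again just an illustrative sketch; a cursor pointing at a since-deleted row is equally harmless, because the >= comparison doesn't need that row to exist):

```python
from operator import itemgetter

def last_page(rows, order_by, limit=50):
    """The last page is just the first page of the reversed order,
    flipped back. Illustrative sketch only."""
    key = itemgetter(*order_by)
    tail = sorted(rows, key=key, reverse=True)[:limit]
    return list(reversed(tail))
```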
Keyset has a few disadvantages over page-number pagination, mainly no random page access. If you want it at the cost of slower queries, you can add the optional offset (mentioned above) and run a count(*) query like a page-number paginator does (or perhaps run some faster estimation). A page number is just a shorter representation of limit-offset pagination when you know the total count, and combined with keyset it would still be much faster on long tables than plain limit-offset (or page numbers).
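A sketch of that hybrid (the cursor cache here is my own assumption, not part of the scheme above): resolve a requested page number to the nearest known keyset plus a small residual offset, falling back to plain limit-offset when nothing is cached.

```python
def locate_page(target, known_cursors, limit):
    """known_cursors maps page number -> keyset of that page's first
    row (a hypothetical cache). Return (keyset, residual_offset) so
    the query becomes WHERE key >= keyset ... OFFSET residual_offset
    LIMIT limit; (None, big_offset) means plain limit-offset."""
    anchors = [p for p in known_cursors if p <= target]
    if not anchors:
        return None, target * limit      # no keyset known: full offset
    best = max(anchors)                  # nearest anchor at or below target
    return known_cursors[best], (target - best) * limit
```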
This is roughly how I plan to implement it, although I don't dream of building anything universal; I just want a small reusable module so I don't have to reimplement this in every project that needs it. If you ever release any code, feel free to ping me for review or testing; chances are I'll like your library and won't have to implement mine :) I have too many library ideas and not enough time to work on them all, so collaboration would be very welcome.
Ivan.