Loose proposal: AccountPatchReviews in NoteDb


Dave Borowitz

Mar 9, 2016, 4:48:03 PM
to repo-discuss
Below is a slightly edited proposal that I made to Shawn a while ago about how to store AccountPatchReviews in NoteDb. As I'd like to look at this topic during the hackathon next week, I figured I should share it with a wider audience.

Shawn's reaction was basically, "sure, but it'd be easier to just use a local H2 database." By making the possibly overengineered proposal below, I'm not ruling out the easy hack.


This is a tricky one. It shouldn't go in the change meta DAG; this is metadata private to a user, and it would be really noisy in the history.

AccountPatchReviews has tuples of (patch set id, file, account id).

Keep in mind as we're discussing: each user will probably have thousands of reviewed bits, growing without bound. (Contrast with stars, which grow much more slowly and which users might periodically clean out manually.)

My semi-thought-through idea is: store it in All-Users by account ID, one ref per user, in refs/reviewed/CD/ABCD. This is a notes ref where the SHA-1s are patch set revisions.

The contents of a file in the notes ref look like:
# change,ps,path
1,1,path/to/bar.txt
1,1,path/to/foo.txt
2,3,"path/to/\nbaz.txt"

Include the change and patch set because the same SHA-1 can show up in multiple projects/branches. The entries are sorted so we can binary search them (or something) as a later optimization.
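To make that concrete, here is a rough, purely illustrative sketch of serializing one such note file; the ReviewedNote and Entry names are made up for this example, and the path quoting is deliberately simplified:

import java.util.List;
import java.util.TreeSet;

class ReviewedNote {
  // One reviewed-file entry: (change, patch set, path), as in the format above.
  record Entry(int change, int patchSet, String path) implements Comparable<Entry> {
    @Override
    public int compareTo(Entry o) {
      int c = Integer.compare(change, o.change);
      if (c != 0) {
        return c;
      }
      c = Integer.compare(patchSet, o.patchSet);
      if (c != 0) {
        return c;
      }
      return path.compareTo(o.path);
    }

    // Quote the path only when it contains a comma, quote, or newline.
    String toLine() {
      String p = path;
      if (p.contains(",") || p.contains("\"") || p.contains("\n")) {
        p = '"' + p.replace("\"", "\\\"").replace("\n", "\\n") + '"';
      }
      return change + "," + patchSet + "," + p;
    }
  }

  // Serialize all entries for one patch set SHA-1, sorted so the file can be
  // binary searched later.
  static String format(List<Entry> entries) {
    StringBuilder sb = new StringBuilder("# change,ps,path\n");
    for (Entry e : new TreeSet<>(entries)) {
      sb.append(e.toLine()).append('\n');
    }
    return sb.toString();
  }
}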

This is all well and good except:

1. Write latency and throughput will probably suck on googlesource.com. We already explicitly work around poor write latency in our current glue code. Possible optimization: batch up a few seconds' worth of writes in memory, in process, and trade off durability for fewer writes (a rough sketch of this batching idea follows at the end of this message).

2. These will have giant amounts of history; I would probably opt for just squashing, since we don't really need an audit log here.

All that said, there's no alternative that doesn't involve one ref update per bit flip (modulo the batching mentioned above). The only alternative that doesn't involve writing a couple of objects per bit flip is to store these all in refs, but remember: many thousands of bits per user. (A quick count shows about 7.5M bits for all of $large_internal_site.)
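A rough sketch of that batching idea, purely illustrative: the ReviewedBitBatcher name and the flush callback are hypothetical stand-ins for the real NoteDb write.

import java.util.LinkedHashSet;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

class ReviewedBitBatcher {
  private final Set<String> pending = new LinkedHashSet<>(); // "change,ps,path" keys
  private final Consumer<Set<String>> flush; // stands in for the real ref update
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  ReviewedBitBatcher(Consumer<Set<String>> flush, long intervalSeconds) {
    this.flush = flush;
    // Flush at most once per interval; anything buffered in between is lost
    // on a crash (the durability tradeoff described above).
    scheduler.scheduleWithFixedDelay(
        this::flushNow, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
  }

  synchronized void markReviewed(int change, int patchSet, String path) {
    pending.add(change + "," + patchSet + "," + path);
  }

  private void flushNow() {
    Set<String> batch;
    synchronized (this) {
      if (pending.isEmpty()) {
        return;
      }
      batch = new LinkedHashSet<>(pending);
      pending.clear();
    }
    flush.accept(batch); // one ref update (and a couple of objects) per batch
  }
}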

Martin Fick

Mar 9, 2016, 6:19:20 PM
to repo-d...@googlegroups.com, Dave Borowitz
On Wednesday, March 09, 2016 04:47:40 PM 'Dave Borowitz' via
Repo and Gerrit Discussion wrote:
...
> Shawn's reaction was basically, "sure, but it'd be easier
> to just use a local H2 database." By making the possibly
> overengineered proposal below, I'm not ruling out the
> easy hack.

H2 would not work (well) in a multi-master setup.

...
> This is a tricky one. It shouldn't go in the change meta
> DAG; this is metadata private to a user, and would be
> really noisy in the history

Agreed, because it really isn't about the change.

..
> This is a notes ref where the SHA-1s are patch set
> revisions.
>
> Contents of a file in the notes ref is like:
> # change,ps,path
> 1,1,path/to/bar.txt
> 1,1,path/to/foo.txt
> 2,3,"path/to/\nbaz.txt"
>
> Include change and patch set because the same SHA-1 can
> show up in multiple projects/branches.

Hmm, what if we decided that reviewed really is per SHA? After all, if you reviewed it as another file, even on another PS of another change, have you not reviewed it? That would be super cool in some ways: reviews on files that didn't change would carry over!

..
> 1. Write latency and throughput will probably suck on
> googlesource.com. We already explicitly optimize write
> latency sucking in our current glue code. Possible
> optimization: batch a few seconds' worth of writes up in
> memory in process and trade off durability for fewer
> writes.

If we don't care about latency, then just do it in the
background, one at a time?

...
> 2. These will have giant amounts of history; I would
> probably opt for just squashing, we don't really need an
> audit log here.
>
> All that said, there's no alternative that doesn't involve
> one ref update per bit flip (modulo batching as mentioned
> above). The only alternative that doesn't involve writing
> a couple objects per bit flip is to store these all in
> refs but, remember, many thousands of bits per user. (A
> quick count shows about 7.5M bits for all of
> $large_internal_site.)

Crazy idea: create an All-Users subdirectory with one repo
per user? Then use one ref per sha?

-Martin

--
The Qualcomm Innovation Center, Inc. is a member of Code
Aurora Forum, hosted by The Linux Foundation

Dave Borowitz

Mar 9, 2016, 6:25:08 PM
to Martin Fick, repo-discuss
On Wed, Mar 9, 2016 at 6:19 PM, Martin Fick <mf...@codeaurora.org> wrote:
On Wednesday, March 09, 2016 04:47:40 PM 'Dave Borowitz' via
Repo and Gerrit Discussion wrote:
...
> Shawn's reaction was basically, "sure, but it'd be easier
> to just use a local H2 database." By making the possibly
> overengineered proposal below, I'm not ruling out the
> easy hack.

H2 would not work (well) in a multi-master setup.

Right, of course not, but we want to kill ReviewDb before multi-master happens :)

Actually, something like native SQL replication is fine: we don't care about lag between sites (a given user will be using one site at a time), and we don't even care about ordering. This is more or less the way googlesource.com does it today.

But as I think I said, storing it in git does give us multi-master for free, and it feels nicer because all our primary storage is in git.
 
...
> This is a tricky one. It shouldn't go in the change meta
> DAG; this is metadata private to a user, and would be
> really noisy in the history

Agreed, because it really isn't about the change.
..
> This is a notes ref where the SHA-1s are patch set
> revisions.
>
> Contents of a file in the notes ref is like:
> # change,ps,path
> 1,1,path/to/bar.txt
> 1,1,path/to/foo.txt
> 2,3,"path/to/\nbaz.txt"
>
> Include change and patch set because the same SHA-1 can
> show up in multiple projects/branches.

Hmm, what if we decided that reviewed really is per sha?
Afterall, if you reviewed it as another file, even on another
PS on another change, have you not reviewed it?  That would
be super cool in some ways, reviews on files that didn't
change would carry over!

I think that is probably not a good idea when you take reverts into account.
 
..
> 1. Write latency and throughput will probably suck on
> googlesource.com. We already explicitly optimize write
> latency sucking in our current glue code. Possible
> optimization: batch a few seconds' worth of writes up in
> memory in process and trade off durability for fewer
> writes.

If we don't care about latency, then just do it in the
background, one at a time?

This is a tradeoff that doesn't affect the storage format and we can tweak the implementation based on real-world performance.
 
...
> 2. These will have giant amounts of history; I would
> probably opt for just squashing, we don't really need an
> audit log here.
>
> All that said, there's no alternative that doesn't involve
> one ref update per bit flip (modulo batching as mentioned
> above). The only alternative that doesn't involve writing
> a couple objects per bit flip is to store these all in
> refs but, remember, many thousands of bits per user. (A
> quick count shows about 7.5M bits for all of
> $large_internal_site.)

Crazy idea: create an All-Users subdirectory with one repo
per user?  Then use one ref per sha?

Mind blown. I will think through that as we're discussing NoteDb for accounts.

Dave Borowitz

Mar 9, 2016, 6:45:16 PM
to Martin Fick, repo-discuss
One downside I can think of off the bat is that people already complain about how slow listing all projects is due to ACL checks, so adding several thousand more projects is not going to win any friends. But maybe we can special-case around that.

Martin Fick

Mar 9, 2016, 6:50:27 PM
to Dave Borowitz, repo-discuss
On Wednesday, March 09, 2016 06:44:49 PM Dave Borowitz
wrote:
> On Wed, Mar 9, 2016 at 6:24 PM, Dave Borowitz
<dbor...@google.com> wrote:
> > On Wed, Mar 9, 2016 at 6:19 PM, Martin Fick
<mf...@codeaurora.org> wrote:
> >> On Wednesday, March 09, 2016 04:47:40 PM 'Dave
> >> Borowitz' via Repo and Gerrit Discussion wrote:
> >> ...
> >>
> >> > 2. These will have giant amounts of history; I would
> >> > probably opt for just squashing, we don't really need
> >> > an
> >> > audit log here.
> >> >
> >> > All that said, there's no alternative that doesn't
> >> > involve one ref update per bit flip (modulo batching
> >> > as mentioned above). The only alternative that
> >> > doesn't involve writing a couple objects per bit
> >> > flip is to store these all in refs but, remember,
> >> > many thousands of bits per user. (A quick count
> >> > shows about 7.5M bits for all of
> >> > $large_internal_site.)
> >>
> >> Crazy idea: create an All-Users subdirectory with one
> >> repo per user? Then use one ref per sha?
> >
> > Mind blown. I will think through that as we're
> > discussion notedb for accounts.
>
> One downside I can think of off the bat is that people
> already complain about how slow listing all projects is
> due to ACL checks, so adding several thousand more is not
> going to win any friends. But maybe we can special-case
> around that.

Another idea would be to hack jgit to have multiple refDbs for a single repo somehow? It wouldn't be hard to do, but it might have some bad downsides. For starters, git gc would prune stuff that it shouldn't. Since the extra refs would not be visible from git anyway, maybe make the repo some weird special case only visible from Gerrit? Ugh, replication would not work then either (would you want to replicate the reviewed bit? Not likely).

Martin Fick

Mar 9, 2016, 6:56:50 PM
to repo-d...@googlegroups.com, Dave Borowitz
Another downside of using refs is that ref deletes are slow with a packed-refs file. The bigger the file, the slower the delete... So this might be bad even with a separate repo per user, if the strategy is to use the existence of the ref as the bit. :(

-Martin

Dave Borowitz

Mar 9, 2016, 7:00:07 PM
to Martin Fick, repo-discuss
I'll bet deleting from a packed-refs file on local disk is faster than writing objects to our storage backend :)

Also deleting (explicitly marking unreviewed) is comparatively much rarer than adding, isn't it?

Martin Fick

Mar 9, 2016, 7:07:16 PM
to Dave Borowitz, repo-discuss
On Wednesday, March 09, 2016 06:59:41 PM Dave Borowitz wrote:
Yes. But it can be reaaaaally slow with lots of refs. We only do major deletes during downtime because replicating the deletes to slaves takes forever.

Uglier hack: what if we extended the batching idea by storing the unsaved batch in the websession to get durability before it is written? This would allow much longer batch times, and it would allow the updates to be visible to other masters before they are in the git repo (otherwise an issue with batching for MM). Hmm, Google doesn't have websessions, do they?

Dave Borowitz

Mar 15, 2016, 7:31:32 AM
to repo-discuss
We discussed this a bit today in the hackathon.

There are now 4 proposals for how to deal with the reviewed bit. In order of increasing difficulty of implementation:

-Remove the functionality entirely.
-Use a local H2 database.
-Remove the server-side component; persist reviewed bits in browser storage.
-Implement the fully-engineered proposal below.

Whether to remove the functionality (client and/or server) will obviously require some input from users. Thoughts in the room ranged from "I never use this" to "I use this, but only occasionally for large changes" to "I use this sometimes but wouldn't be too sad if it went away."

One previously unforeseen issue was raised with using a local H2 database: if we kill the ReviewDb table, then people who depend on SQL replication of their non-H2 database today will lose replication. We can probably work around this, with a bit more work, by using an interface other than gwtorm to interact with the database, thus achieving our goal of removing the table from the ReviewDb interface. But that may change the difficulty rank of this option relative to the others.

Martin Fick

Mar 15, 2016, 12:58:25 PM
to repo-d...@googlegroups.com, Dave Borowitz
On Tuesday, March 15, 2016 12:31:06 PM 'Dave Borowitz' via
Repo and Gerrit Discussion wrote:
> We discussed this a bit today in the hackathon.
>
> There are now 4 proposals for how to deal with the
> reviewed bit. In order of increasing difficulty of
> implementation:
>
> -Remove the functionality entirely.
> -Use a local H2 database.
> -Remove the server-side component; persist reviewed bits
> in browser storage.
> -Implement the fully-engineered proposal below.

In evaluating the best approach, it might be helpful to outline why we are trying to remove the DB. My impression is that there are several answers to that:

1) we don't want 2 indexes
2) it would be nice to have review data in git to help enable distributed reviews some day
3) it would be nice to have review data in git because of the built-in "history" that git provides
4) it would be nice to have review data in git because it is a better long-term store (human readable) than a DB for "archive-like" and "version-control-like" data
5) git seems like a better fit for configuration-like data (project configs, user configs): it provides history and review capabilities, and it fits the data better because the data does not fit well into fixed schemas
6) it would be nice to not have to manage a DB
7) it would be nice to not have to scale a DB

I am starting to feel like the data here does not benefit much from being removed from the DB. It only seems to help with #6 and #7, which I don't think are actually a problem in practice yet. This data generally seems like a better fit for a DB than anything else. Using H2 does seem to address #6 and #7, but it is not MM friendly.

The proposal here seems over-engineered, with little benefit. It is essentially user-private "usage" data, not something that benefits from history and being "archived".

Is there a shared embedded DB that we could use instead of H2? I understand that sqlite is not Java friendly, but it would allow shared usage on NFS, I think. Maybe move it to H2, but make the backend pluggable (so that a MM solution can be inserted easily)?

Shawn Pearce

Mar 15, 2016, 1:27:14 PM
to Martin Fick, repo-discuss, Dave Borowitz
Yes, I agree this is user-private usage data that, unlike all other change data, has no long-term value as a shared property of the change. Therefore it should not be stored in the /meta ref of the change.

Is there a shared embedded DB that we could use instead of
H2.  I understand that sqlite is not java friendly, but it
would allow shared usage on NFS I think.  Maybe move it to
H2, but make the backend pluggable (so that a MM solution
can be inserted easily)?

Since these are very small per-user bit values that are quickly read and updated, a database is a better fit than a version-controlled history storage system. So I am in favor of putting this into some other system.

I think this should just be a small interface that behaves like a DynamicItem, similar to web session. A plugin should be able to replace the default implementation.

For a simple single-server node that the admin just doesn't want to think much about, the default implementation can just be an H2 database on local disk, similar to the persistent caches.

A multi-master server might run a simple master-master MySQL replication system for this. Consistency isn't that important for this data, and it's a very simple table, indexed in only one dimension. Google runs this table on top of a simple master-master Bigtable. The code to glue this table onto Bigtable is only a handful of lines; using MySQL vs. H2 should also be just a few lines. Less, actually, because we could reuse a fair chunk of that for both systems.

We don't even need gwtorm for that table. It's only got a few operations: an insert method, a delete method, and 3 lookup functions. A generic base SQL class that the H2 and MySQL ones extend from is easy. And if that is just implementing a simple interface, other multi-master variations are possible.
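As a sketch of that small interface (the names below are illustrative, not an existing Gerrit API), something replaceable via DynamicItem could look like:

import java.util.Collection;

// Illustrative sketch only. A DynamicItem binding would let a plugin swap in
// MySQL, Bigtable, etc. in place of the default local H2 implementation.
interface AccountPatchReviewStore {
  /** Mark a file in a patch set as reviewed by the given account. */
  void markReviewed(int accountId, int changeId, int patchSetId, String path);

  /** Clear the reviewed flag for a single file. */
  void clearReviewed(int accountId, int changeId, int patchSetId, String path);

  /** Has this account marked this file reviewed? */
  boolean isReviewed(int accountId, int changeId, int patchSetId, String path);

  /** All paths this account has marked reviewed in the given patch set. */
  Collection<String> findReviewed(int accountId, int changeId, int patchSetId);
}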


luca.mi...@gmail.com

Mar 16, 2016, 3:23:48 AM
to Shawn Pearce, Martin Fick, repo-discuss, Dave Borowitz
To me it seems very similar to the user's session data, so ... (see inline)

Sent from my iPhone
+2, let's not pollute project /meta.


Is there a shared embedded DB that we could use instead of
H2.  I understand that sqlite is not java friendly, but it
would allow shared usage on NFS I think.  Maybe move it to
H2, but make the backend pluggable (so that a MM solution
can be inserted easily)?

Since its very small per-user bit values that are quickly read and updated, a database is a better fit than a version controlled history storage system. So I am in favor of putting this into some other system.

I think this should just be a small interface that behaves like a DynamicItem, similar to web session. A plugin should be able to replace the default implementation.

Yes, because based on different configurations you may have different acceptable solutions, including a DBMS for instance.

Not in core Gerrit then but rather a core-Gerrit plugin.


For a simple single server node that the admin just doesn't want to think much about, the default implementation can just be an H2 database on local disk. Similar to the persistent caches.

A multimaster server might run a simple master-master MySQL replication system for this. Consistency isn't that important over this data, and its a very simple table, indexed in only one dimension. Google runs this table on top of a simple master-master Bigtable. The code to glue this table onto Bigtable is only a handful of lines of code; using MySQL vs. H2 should also be just a few lines. Less actually because we could reuse a fair chunk of that for both systems.

People running Gerrit in AWS would just stick with MySQL or even a flat file on EBS.


We don't even need gwtorm for that table. Its only got a few options. An insert method, a delete method, and 3 lookup functions. A generic base SQL class that the H2 and MySQL ones extend from is easy. And if that is just implementing a simple interface, other multi-master variations are possible.

Yes, just an interface and that's all :-)




Dave Borowitz

Mar 16, 2016, 5:22:35 AM
to Luca Milanesio, Shawn Pearce, Martin Fick, repo-discuss
On Wed, Mar 16, 2016 at 8:23 AM, <luca.mi...@gmail.com> wrote:
To me seems very similar to the user's session data, so ... (See inline)

This actually sounds like you might prefer my option #3: remove the server-side component; persist reviewed bits in browser storage.

Luca Milanesio

Mar 16, 2016, 10:01:46 AM
to Dave Borowitz, Shawn Pearce, Martin Fick, repo-discuss
On 16 Mar 2016, at 09:22, Dave Borowitz <dbor...@google.com> wrote:



On Wed, Mar 16, 2016 at 8:23 AM, <luca.mi...@gmail.com> wrote:
To me seems very similar to the user's session data, so ... (See inline)

This actually sounds like you might prefer my option #3:
-Remove the server-side component; persist reviewed bits in browser storage.


Why not? It could be one of the choices :-)

If you accept in your company that this information can be easily stored in the browser's local storage, you could implement a plugin for that.
For other scenarios, H2 could be good, as you don't have, and don't want to manage, client state.

It should be pluggable IMHO.

Luca.

Dave Borowitz

Mar 16, 2016, 10:15:28 AM
to Luca Milanesio, Shawn Pearce, Martin Fick, repo-discuss
On Wed, Mar 16, 2016 at 3:01 PM, Luca Milanesio <luca.mi...@gmail.com> wrote:

On 16 Mar 2016, at 09:22, Dave Borowitz <dbor...@google.com> wrote:



On Wed, Mar 16, 2016 at 8:23 AM, <luca.mi...@gmail.com> wrote:
To me seems very similar to the user's session data, so ... (See inline)

This actually sounds like you might prefer my option #3:
-Remove the server-side component; persist reviewed bits in browser storage.


Why not? It could be one of the choices :-)

If you accept in your company that this information can be easily stored in the browser local storage, you could implement a plugin for that.
For other scenarios, H2 could be good as you don't have and you don't want to manage client state.

It should be pluggable IMHO.

That's a little more pluggability than I was thinking of. I'm not sure if what you're proposing is actually technically possible, but it might be.

The thing that I was imagining you'd implement in a plugin is: what happens _on the server_ when the server receives a call for PUT /changes/X/revisions/Y/files/Z/reviewed? To make it work on the client side you also have to plug in what happens _on the client_ when the reviewed state is toggled.

I guess that's maybe possible, and the default implementation on the client would be "use the REST API," whereas for a client-only plugin the server implementation would be "return 404." But at this point it's a bit more work up front than just swapping out the backend.

IMHO if we go the client-only way then we should just eliminate the server-side component entirely and not worry about making it pluggable.
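For reference, a minimal, purely illustrative sketch of the client-side call in question (the class name is made up; authentication and error handling are omitted):

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class MarkReviewed {
  // Mark a file reviewed through the REST endpoint mentioned above.
  // Credentials are omitted; the "/a/" prefix assumes authenticated access.
  static int putReviewed(String baseUrl, int change, int revision, String path)
      throws Exception {
    String file = URLEncoder.encode(path, StandardCharsets.UTF_8);
    URL url = new URL(baseUrl + "/a/changes/" + change + "/revisions/" + revision
        + "/files/" + file + "/reviewed");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    return conn.getResponseCode(); // success vs. failure is all the client needs
  }
}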

Still interested in hearing what Shawn and Martin think about the client-only option.

Martin Fick

Mar 16, 2016, 10:44:51 AM
to Dave Borowitz, Luca Milanesio, Shawn Pearce, repo-discuss
On Wednesday, March 16, 2016 03:15:05 PM Dave Borowitz
wrote:
> Still interested in hearing what Shawn and Martin think
> about the client-only option.

This whole feature stems from a Google requirement, I believe? How would that requirement be impacted by such a solution?

What is the motivation for doing it client side? To not have to come up with a server-side solution? If so, are we just punting until the next similar problem? It seems like a pluggable server solution would actually be easier to implement and more full-featured, so why consider the client solution?

Dave Borowitz

Mar 16, 2016, 10:48:00 AM
to Martin Fick, Luca Milanesio, Shawn Pearce, repo-discuss
On Wed, Mar 16, 2016 at 3:44 PM, Martin Fick <mf...@codeaurora.org> wrote:
On Wednesday, March 16, 2016 03:15:05 PM Dave Borowitz
wrote:
> Still interested in hearing what Shawn and Martin think
> about the client-only option.

This whole feature stems from a google requirement I
believe?  How would that requirement be impacted by such a
solution?

What is the motivation to doing it client side?  To not have
to come up with a server side solution?

Yes.
 
If so, are we just punting until the next similar problem?

I guess? We're taking things one step at a time. I don't think whether or not we remove this server-side feature is going to affect the decision-making for other features.

FWIW this is the only table for which we have completely different internal Google glue from the rest of ReviewDb.
 
It seems like a pluggable server solution would actually be easier to
implement and more full featured, so why consider the client
solution?

The main reason is that I think the client-only solution is simpler than the full-git server-side solution. It is probably comparable to the H2 side-database implementation in terms of difficulty.

Luca Milanesio

Mar 16, 2016, 10:52:54 AM
to Martin Fick, Dave Borowitz, Shawn Pearce, repo-discuss
The value of off-loading the problem to the client is making it multi-master compatible out-of-the-box.
The drawback is: if you log in with another browser or device, you don't see the data at all.

Is that acceptable? I believe for most cases, yes :-)

Should this not be acceptable, having a pluggable backend persistence (H2 or whatever you want) would be fit for purpose.

Luca.

Shawn Pearce

Mar 16, 2016, 1:18:34 PM
to Luca Milanesio, Martin Fick, Dave Borowitz, repo-discuss
On Wed, Mar 16, 2016 at 7:52 AM, Luca Milanesio <luca.mi...@gmail.com> wrote:
The value of off-loading the problem to the client is making it multi-master compatible out-of-the-box.

Multi-master is difficult enough that it isn't just "out of the box" yet. You need other things like web sessions and Git synchronized. By the time you get those set up, you can also make sure AccountPatchReviews is configured.
 
The drawback is: if you login with another browser or device, you don't see the data at all.

That is a _huge_ drawback. I switch devices all the time doing code reviews, and I rely on the AccountPatchReviews flag. And coding the browser version is horribly different from the pluggable server cases.

Is that acceptable? I believe for most cases, yes :-)

No. No it is not.

Sorry, but it is not. 

Luca Milanesio

Mar 16, 2016, 4:14:08 PM
to Shawn Pearce, Martin Fick, Dave Borowitz, repo-discuss
Hi Shawn,
I believe I did not express myself clearly enough :-)

On 16 Mar 2016, at 17:18, Shawn Pearce <s...@google.com> wrote:

On Wed, Mar 16, 2016 at 7:52 AM, Luca Milanesio <luca.mi...@gmail.com> wrote:
The value of off-loading the problem to the client is making it multi-master compatible out-of-the-box.

Multi-master is difficult enough out of the box that it isn't just "out of the box" yet. You need other things like web sessions and Git synchronized. By the time you get those setup you can also make sure the AccountPatchReviews is configured.

True, this is equivalent to the problem of sharing web sessions across masters. One solution is sticking the state on the client, which has the (huge) drawbacks mentioned.

 
The drawback is: if you login with another browser or device, you don't see the data at all.

That is a _huge_ drawback. I switch devices all of the time doing code reviews and I rely on the AccountPatchReviews flag. And coding the browser version is a horrible difference from pluggable server cases.

Yes, in your case it is a huge drawback. I would not like "coding it" in the browser, but rather making it pluggable. One of the "server-side" plugins could just get the data and return it back to the client "somehow" (cookie? local storage? other?).

I wouldn't like to have that logic in Gerrit core at all; the core-plugin implementation I would include in Gerrit is a simple H2 DB, server-side.


Is that acceptable? I believe for most cases, yes :-)

No. No it is not.

Sorry, but it is not. 

You are right; I should change my statement to "Would this be acceptable? Sometimes yes, other times not at all."
But again, I wouldn't like Gerrit to "harden" that logic into the core, as this should be as pluggable as the web-session mechanism.


Should this not be acceptable, having a pluggable backend persistence (H2 or whatever you want) would fit for purpose.

This was my conclusion :-)

Hope this clarifies.

Luca.

Oswald Buddenhagen

Mar 18, 2016, 5:57:08 AM
to repo-discuss
On Tue, Mar 15, 2016 at 12:31:06PM +0100, 'Dave Borowitz' via Repo and Gerrit Discussion wrote:
> We discussed this a bit today in the hackathon.
>
> There are now 4 proposals for how to deal with the reviewed bit. In order
> of increasing difficulty of implementation:
>
> -Remove the functionality entirely.
> -Use a local H2 database.
> -Remove the server-side component; persist reviewed bits in browser storage.
> -Implement the fully-engineered proposal below.
>
Here's an addendum based on my chat with Dave one day later (I wasn't there the day before):
- Personally, I couldn't care less if this feature goes away. Part of the reason is the diff-all-on-one-page feature which we (qt-project.org) use.
- As pointed out later in this thread, client-side is almost as "good" as not existing at all: the feature is most useful for big reviews, and in such cases there is an increased probability of needing to work cross-device. So that's a non-starter in my estimation.
- My main observation is that unlimited persistence of these review bits is patently pointless; it would be perfectly reasonable to simply expire them after a month or something. The consequence is that some of the proposed solutions (e.g., the ref-based one) may be more practical than previously thought (a rough pruning sketch follows below).
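As a rough illustration of that expiry idea, a sketch of a periodic prune, assuming a hypothetical table with a last-updated timestamp column (the real table has no such column today):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

class ReviewedBitPruner {
  // Delete reviewed bits older than the given number of days. The table and
  // the last_updated_on column are assumptions for this sketch, not the
  // actual ReviewDb schema.
  static int pruneOlderThan(Connection db, int days) throws SQLException {
    Instant cutoff = Instant.now().minus(days, ChronoUnit.DAYS);
    try (PreparedStatement ps =
        db.prepareStatement(
            "DELETE FROM account_patch_reviews WHERE last_updated_on < ?")) {
      ps.setTimestamp(1, Timestamp.from(cutoff));
      return ps.executeUpdate(); // number of expired rows removed
    }
  }
}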

Bruce Zu

Mar 24, 2016, 6:55:52 PM
to Repo and Gerrit Discussion
It makes sense; it is not common to review changes from half a year ago.
Provide configuration for the user to decide the lifetime of reviewed records, with a sensible default value.
'Dead' records would no longer be available, or could be retrieved via a plugin if one exists.