gopls/internal/filewatcher: backfill events after watching a directory
Previously, file or subdirectory creations could be missed if they
occurred after a directory was created but before the file watcher was
registered for that directory. This resulted in missed events for all
subsequent operations within that subdirectory.
This change addresses this gap by performing a directory scan
immediately after registering a watch. The scan generates creation
events for all files and subdirectories found. For each subdirectory
discovered, the watcher also attempts to register a watch, ensuring
recursive monitoring. This guarantees that no events are missed.
To maintain a logical event sequence, each FileEvent is now paired with
a timestamp. Backfilled events for a directory share a timestamp that is
one microsecond after the directory's creation event. Duplicate creation
events are possible but are considered acceptable.
Fixes golang/go#74292
diff --git a/gopls/internal/filewatcher/filewatcher.go b/gopls/internal/filewatcher/filewatcher.go
index 30777e7..2c9852c 100644
--- a/gopls/internal/filewatcher/filewatcher.go
+++ b/gopls/internal/filewatcher/filewatcher.go
@@ -10,17 +10,37 @@
"log/slog"
"os"
"path/filepath"
+ "slices"
"strings"
"sync"
"time"
"github.com/fsnotify/fsnotify"
"golang.org/x/tools/gopls/internal/protocol"
+ "golang.org/x/tools/gopls/internal/util/moremaps"
)
// ErrClosed is used when trying to operate on a closed Watcher.
var ErrClosed = errors.New("file watcher: watcher already closed")
+// stampedEvents holds a collection of file events and their associated timestamps.
+//
+// For events from the fsnotify watcher, each event has its own timestamp,
+// ensuring len(events) == len(stamps).
+// For events from a directory scan, all events share a single timestamp,
+// so len(stamps) is always 1.
+type stampedEvents struct {
+ events []protocol.FileEvent
+ stamps []time.Time
+}
+
+func newStampedEvents() *stampedEvents {
+ return &stampedEvents{
+ events: make([]protocol.FileEvent, 0),
+ stamps: make([]time.Time, 0),
+ }
+}
+
// Watcher collects events from a [fsnotify.Watcher] and converts them into
// batched LSP [protocol.FileEvent]s.
type Watcher struct {
@@ -48,9 +68,17 @@
// watch registrations.
dirCancel map[string]chan struct{}
- // events is the current batch of unsent file events, which will be sent
- // when the timer expires.
- events []protocol.FileEvent
+ // events is the current batch of unsent file events captured by the
+ // fsnotify.Watcher, which will be sent when the timer expires.
+ // Each event is guaranteed to have a corresponding timestamp.
+ events *stampedEvents
+
+ // scannedEvents stores events from directory scans that run after a watch is
+ // registered, capturing files created between directory creation and the watch
+ // becoming active. The key is the directory path. All events from a single
+ // scan share a timestamp set slightly after the directory's creation event
+ // to ensure correct ordering.
+ scannedEvents map[string]*stampedEvents
}
// New creates a new file watcher and starts its event-handling loop. The
@@ -65,11 +93,13 @@
return nil, err
}
w := &Watcher{
- logger: logger,
- watcher: watcher,
- dirCancel: make(map[string]chan struct{}),
- errs: make(chan error),
- stop: make(chan struct{}),
+ logger: logger,
+ watcher: watcher,
+ dirCancel: make(map[string]chan struct{}),
+ errs: make(chan error),
+ stop: make(chan struct{}),
+ events: newStampedEvents(),
+ scannedEvents: make(map[string]*stampedEvents),
}
w.runners.Add(1)
@@ -125,6 +155,14 @@
// fsnotify does not guarantee clean filepaths.
event.Name = filepath.Clean(event.Name)
+ // Assign a timestamp to the event to ensure logical ordering.
+ // Events related to a directory D are ordered as follows:
+ // T0: Directory D creation.
+ // T0 + 1µs: Backfilled creation events for files/subdirectories in D.
+ // T1..TN: File/directory events captured by fsnotify's watcher.
+ // TN+1: Directory D deletion.
+ stamp := time.Now()
+
// fsnotify.Event should not be handled concurrently, to preserve their
// original order. For example, if a file is deleted and recreated,
// concurrent handling could process the events in reverse order.
@@ -142,13 +180,59 @@
// Newly created directories are watched asynchronously to prevent
// a potential deadlock on Windows(see fsnotify/fsnotify#502).
// Errors are reported internally.
- if done, release := w.addWatchHandle(event.Name); done != nil {
- go func() {
- w.errs <- w.watchDir(event.Name, done)
-
+ var recursiveWatch func(path string, done chan struct{}, stamp time.Time, release func())
+ recursiveWatch = func(path string, done chan struct{}, stamp time.Time, release func()) {
+ watchErr := w.watchDir(path, done)
+ defer func() {
// Only release after the error is sent.
+ w.errs <- watchErr
release()
}()
+
+ if watchErr != nil {
+ return
+ }
+
+ // Scan the directory for any entries created since
+ // the directory was created. This scan may report
+ // events that are also captured by fsnotify, but
+ // such duplicates are acceptable and will be
+ // handled by the events consumer.
+ entries, err := os.ReadDir(path)
+ if err != nil {
+ return
+ }
+
+ records := &stampedEvents{
+ events: make([]protocol.FileEvent, 0),
+ stamps: []time.Time{stamp},
+ }
+ for _, entry := range entries {
+ p := filepath.Join(path, entry.Name())
+ records.events = append(records.events, protocol.FileEvent{
+ URI: protocol.URIFromPath(p),
+ Type: protocol.Created,
+ })
+
+ if !entry.IsDir() {
+ continue
+ }
+
+ if done, release := w.addWatchHandle(p); done != nil {
+ // Ensures nested scanned events appear after parent.
+ go recursiveWatch(p, done, stamp.Add(1*time.Microsecond), release)
+ }
+
+ }
+
+ if len(records.events) > 0 {
+ w.addScannedEvent(path, records)
+ }
+ }
+
+ if done, release := w.addWatchHandle(event.Name); done != nil {
+ // Ensures nested scanned events appear after parent.
+ go recursiveWatch(event.Name, done, stamp.Add(1*time.Microsecond), release)
}
case protocol.Deleted:
// Upon removal, we only need to remove the entries from
@@ -160,7 +244,7 @@
}
}
- w.addEvent(*e)
+ w.addEvent(*e, stamp)
timer.Reset(delay)
}
}
@@ -314,11 +398,11 @@
var afterAddHook func(path string, err error)
-// addWatchHandle registers a new directory watch.
-// The returned 'done' channel should be used to signal cancellation of a
-// pending watch, the release function should be called once watch registration
-// is done.
-// It returns nil if the watcher is already closing.
+// addWatchHandle acquires a watch handle for a directory path.
+// It returns a 'done' channel for cancellation and a 'release' function to be
+// called when the watch operation is complete.
+// It returns nil if a watch is already pending for the path or if the watcher
+// is closing, indicating the client should not attempt to watch the directory.
func (w *Watcher) addWatchHandle(path string) (done chan struct{}, release func()) {
w.mu.Lock()
defer w.mu.Unlock()
@@ -327,6 +411,10 @@
return nil, nil
}
+ if _, ok := w.dirCancel[path]; ok {
+ return nil, nil
+ }
+
done = make(chan struct{})
w.dirCancel[path] = done
@@ -335,12 +423,17 @@
return done, w.watchers.Done
}
-// removeWatchHandle removes the handle for a directory watch and cancels any
-// pending watch attempt for that path.
+// removeWatchHandle clears the pending watch for a path, canceling the operation
+// and allowing a new handle to be acquired by addWatchHandle.
+// It is safe to call this multiple times.
func (w *Watcher) removeWatchHandle(path string) {
w.mu.Lock()
defer w.mu.Unlock()
+ if w.dirCancel == nil { // file watcher is closing.
+ return
+ }
+
if done, ok := w.dirCancel[path]; ok {
delete(w.dirCancel, path)
close(done)
@@ -357,31 +450,68 @@
return isDir
}
-func (w *Watcher) addEvent(event protocol.FileEvent) {
+func (w *Watcher) addEvent(event protocol.FileEvent, stamp time.Time) {
w.mu.Lock()
defer w.mu.Unlock()
-
- // Some systems emit duplicate change events in close
- // succession upon file modification. While the current
- // deduplication is naive and only handles immediate duplicates,
- // a more robust solution is needed.
+ // Some systems emit duplicate change events in close succession upon file
+ // modification. While the current deduplication is naive and only handles
+ // immediate duplicates, a more robust solution is needed.
//
// TODO(hxjiang): Enhance deduplication. The current batching of
// events means all duplicates, regardless of proximity, should
// be removed. Consider checking the entire buffered slice or
// using a map for this.
- if len(w.events) == 0 || w.events[len(w.events)-1] != event {
- w.events = append(w.events, event)
+ if len(w.events.events) == 0 || w.events.events[len(w.events.events)-1] != event {
+ w.events.events = append(w.events.events, event)
+ w.events.stamps = append(w.events.stamps, stamp)
}
}
-func (w *Watcher) drainEvents() []protocol.FileEvent {
+func (w *Watcher) addScannedEvent(path string, records *stampedEvents) {
w.mu.Lock()
- events := w.events
- w.events = nil
+ defer w.mu.Unlock()
+
+ w.scannedEvents[path] = records
+}
+
+// drainEvents collects and merges pending events from directory scans and the
+// fsnotify watcher, returning them sorted chronologically.
+func (w *Watcher) drainEvents() []protocol.FileEvent {
+ // Copy the fsnotify-reported events and the backfilled (scanned) events,
+ // reset the fields, then release the lock.
+ w.mu.Lock()
+ watchedEvents := w.events
+ w.events = newStampedEvents()
+
+ scanned := w.scannedEvents
+ w.scannedEvents = make(map[string]*stampedEvents)
w.mu.Unlock()
- return events
+ // Merge the scanned events and fsnotify watched events.
+ sorted := moremaps.ValueSlice(scanned)
+ slices.SortFunc(sorted, func(a, b *stampedEvents) int {
+ return a.stamps[0].Compare(b.stamps[0])
+ })
+
+ var merged []protocol.FileEvent
+ var i, j int
+
+ for j < len(sorted) && i < len(watchedEvents.events) {
+ if watchedEvents.stamps[i].After(sorted[j].stamps[0]) {
+ merged = append(merged, sorted[j].events...)
+ j++
+ } else {
+ merged = append(merged, watchedEvents.events[i])
+ i++
+ }
+ }
+
+ merged = append(merged, watchedEvents.events[i:]...)
+ for j < len(sorted) {
+ merged = append(merged, sorted[j].events...)
+ j++
+ }
+ return merged
}
// Close shuts down the watcher, waits for the internal goroutine to terminate,
diff --git a/gopls/internal/filewatcher/filewatcher_test.go b/gopls/internal/filewatcher/filewatcher_test.go
index 80abf95..1060fc4 100644
--- a/gopls/internal/filewatcher/filewatcher_test.go
+++ b/gopls/internal/filewatcher/filewatcher_test.go
@@ -172,6 +172,7 @@
expectedEvents: []protocol.FileEvent{
{URI: "foo", Type: protocol.Deleted},
{URI: "baz", Type: protocol.Created},
+ {URI: "baz/bar.go", Type: protocol.Created},
},
},
{
@@ -186,6 +187,7 @@
},
expectedEvents: []protocol.FileEvent{
{URI: "baz", Type: protocol.Created},
+ {URI: "baz/bar.go", Type: protocol.Created},
{URI: "foo", Type: protocol.Deleted},
},
},
@@ -195,19 +197,43 @@
watchErrorPath: "foo",
changes: func(root string, errs chan error) error {
// Prepare a dir with a broken symbolic link.
- // foo <- 1st
- // └── from.go -> root/to.go <- 1st
- tmp := filepath.Join(t.TempDir(), "foo")
- if err := os.Mkdir(tmp, 0755); err != nil {
- return err
- }
- from := filepath.Join(tmp, "from.go")
+ // foo <- 1st
+ // ├── bar <- 1st
+ // │ ├── bar.go <- 1st
+ // │ └── baz <- 1st
+ // │ └── baz.go <- 1st
+ // └── from.go -> root/to.go <- 1st
+ tmp := filepath.Join(t.TempDir(), "foo")
+
+ from := filepath.Join(tmp, "from.go")
to := filepath.Join(root, "to.go")
- // Create the symbolic link to a non-existing file. This would
- // cause the watch registration to fail.
- if err := os.Symlink(to, from); err != nil {
- return err
+ {
+ if err := os.Mkdir(tmp, 0755); err != nil {
+ return err
+ }
+
+ // Create the symbolic link to a non-existing file. This would
+ // cause the watch registration to fail.
+ if err := os.Symlink(to, from); err != nil {
+ return err
+ }
+
+ if err := os.Mkdir(filepath.Join(tmp, "bar"), 0755); err != nil {
+ return err
+ }
+
+ if err := os.WriteFile(filepath.Join(tmp, "bar", "bar.go"), []byte("package main"), 0644); err != nil {
+ return err
+ }
+
+ if err := os.Mkdir(filepath.Join(tmp, "bar", "baz"), 0755); err != nil {
+ return err
+ }
+
+ if err := os.WriteFile(filepath.Join(tmp, "bar", "baz", "baz.go"), []byte("package main"), 0644); err != nil {
+ return err
+ }
}
// Move the directory containing the broken symlink into place
@@ -219,6 +245,10 @@
// root
// ├── foo <- 2nd (Move)
+ // | ├── bar <- 2nd (Move)
+ // | │ ├── bar.go <- 2nd (Move)
+ // | │ └── baz <- 2nd (Move)
+ // | │ └── baz.go <- 2nd (Move)
// │ ├── from.go -> ../to.go <- 2nd (Move)
// │ └── foo.go <- 4th (Create)
// └── to.go <- 3rd (Create)
@@ -261,10 +291,14 @@
},
expectedEvents: []protocol.FileEvent{
{URI: "foo", Type: protocol.Created},
- // TODO(hxjiang): enable this after implementing retrospectively
- // generate create events.
- // {URI: "foo/from.go", Type: protocol.Created},
{URI: "to.go", Type: protocol.Created},
+ // Backfilled events scanned after watch registration, from top to bottom.
+ {URI: "foo/bar", Type: protocol.Created},
+ {URI: "foo/from.go", Type: protocol.Created},
+ {URI: "foo/bar/bar.go", Type: protocol.Created},
+ {URI: "foo/bar/baz", Type: protocol.Created},
+ {URI: "foo/bar/baz/baz.go", Type: protocol.Created},
+ // Incoming event after watch is established.
{URI: "foo/foo.go", Type: protocol.Created},
},
},
// For events from the fsnotify watcher, each event has its own timestamp,
// ensuring len(events) == len(stamps).
// For events from a directory scan, all events share a single timestamp,
// so len(stamps) is always 1.
Why this inconsistency? Why not ensure that len(stamps) == len(events), duplicating the timestamp if necessary?
stamps []time.Time
I think 'timeStamps' is clearer, perhaps?
events: make([]protocol.FileEvent, 0),
stamps: make([]time.Time, 0),
Why do we need to create empty slices here? Why are we differentiating from nil?
It seems that newStampedEvents could just be replaced by new(stampedEvents)...
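A minimal sketch to illustrate (a string stands in for protocol.FileEvent here): the zero value is already usable, because append works on nil slices.

```go
package main

import (
	"fmt"
	"time"
)

// A stand-in for the CL's stampedEvents, with strings instead of protocol.FileEvents.
type stampedEvents struct {
	events []string
	stamps []time.Time
}

func main() {
	se := new(stampedEvents) // zero value: both slices are nil
	se.events = append(se.events, "Created foo/bar.go")
	se.stamps = append(se.stamps, time.Now())
	fmt.Println(len(se.events), len(se.stamps)) // 1 1
}
```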
w.errs <- watchErr
Why does this channel send need to be in the defer?
entries, err := os.ReadDir(root)
Why do we read our own entries, rather than use something like filepath.WalkDir?
How does this behave with symlinks? If I have a symlink loop, will this run forever? Maybe add a test.
records := &stampedEvents{
events: make([]protocol.FileEvent, 0),
stamps: []time.Time{stamp},
}
Especially with a literal here, let's remove the constructor. IMO a constructor should not be optional.
And per previous note, I don't know why we're differentiating an empty, allocated slice rather than just relying on the zero value nil slice.
records.events = append(records.events, protocol.FileEvent{
URI: protocol.URIFromPath(p),
Type: protocol.Created,
})
I don't understand the stampedEvents API. Should we have a timestamp for every entry? Why do we need timestamps?
w.addScannedEvent(root, records)
The name of this function is 'addScannedEvent', but it accepts records. Is this a stale name?
func (w *Watcher) drainEvents() []protocol.FileEvent {
'flush' is a common term for this. How about 'flushEvents'?
// Merge the scanned events and fsnotify watched events.
I guess I don't understand why we need the scannedEvents map at all. Why do we need separate accounting for scanned events, rather than just adding them to the events map? Is this just all in service of keeping the sorted slice small?
// For events from the fsnotify watcher, each event has its own timestamp,
// ensuring len(events) == len(stamps).
// For events from a directory scan, all events share a single timestamp,
// so len(stamps) is always 1.
Why this inconsistency? Why not ensure that len(stamps) == len(events), duplicating the timestamp if necessary?
Done. I have changed the struct doc: len(stamps) must equal len(events). I also added a method addEvent(event, timeStamp).
I think 'timeStamps' is clearer, perhaps?
Done
events: make([]protocol.FileEvent, 0),
stamps: make([]time.Time, 0),
Why do we need to create empty slices here? Why are we differentiating from nil?
It seems that newStampedEvents could just be replaced by new(stampedEvents)...
Removed the constructor.
Why does this channel send need to be in the defer?
Added comment
entries, err := os.ReadDir(root)
Why do we read our own entries, rather than use something like filepath.WalkDir?
How does this behave with symlinks? If I have a symlink loop, will this run forever? Maybe add a test.
I have changed it to WalkDir, but I only walk the top-most level of the dir. There is a reason; please also read the other long comment below.
First, we need to establish an agreement: we can only scan a dir after the watch is registered. This makes sure we never miss any events. We may get duplicate events, but I think that's acceptable.
Also, we know watching a dir may take longer than expected. We have logic to keep retrying until it succeeds.
But `filepath.WalkDir` walks directories sequentially, so it will not move on to the next dir until the current dir is finished. We can use a separate goroutine to handle the registration, which is basically the current implementation.
The current implementation is: go through all the entries; if an entry is a dir, spin up a goroutine to watch it and scan it; if there is a subdirectory, spin up another goroutine; and so on.
records := &stampedEvents{
events: make([]protocol.FileEvent, 0),
stamps: []time.Time{stamp},
}
Especially with a literal here, let's remove the constructor. IMO a constructor should not be optional.
And per previous note, I don't know why we're differentiating an empty, allocated slice rather than just relying on the zero value nil slice.
Done. I'm not sure I understand your statement correctly. Do you think it is OK to leave it without a constructor
and use the default `new(stampedEvents)`?
records.events = append(records.events, protocol.FileEvent{
URI: protocol.URIFromPath(p),
Type: protocol.Created,
})
I don't understand the stampedEvents API. Should we have a timestamp for every entry? Why do we need timestamps?
Done. But they will share the same timestamp.
The name of this function is 'addScannedEvent', but it accepts records. Is this a stale name?
This is not a stale name; I have changed it to events.
I used records because I thought it was easier to understand.
Scanned events are kept in a map from dir path to the scanned events. Those events are merged with the fsnotify events when the timer fires.
func (w *Watcher) drainEvents() []protocol.FileEvent {
'flush' is a common term for this. How about 'flushEvents'?
Done
// Merge the scanned events and fsnotify watched events.
I guess I don't understand why we need the scannedEvents map at all. Why do we need separate accounting for scanned events, rather than just adding them to the events map? Is this just all in service of keeping the sorted slice small?
It's not meant to keep the sorted slice small. Let me give you an example and walk you through it; times are denoted by T0, T1, T2, ...
```
- root                 <--- T0
  |-- foo              <--- T1
  |   |-- foo0.go      <--- T2
  |   |-- foo1.go      <--- T3
  |   |-- bar          <--- T4
  |   |   |-- bar0.go  <--- T5
  |   |   |-- bar1.go  <--- T6

Tx is the time when the dir or file was created.
```
At T0, root is created and fsnotify tells us. We start trying to watch root; we try a few times and only succeed after T6. But every file and dir is already created, and we have no idea when they were created. We still scan the root dir, so we find out that the dir foo exists.
That's why we assign a timestamp to it. We use `T0 + 1 microsecond`. This makes sure all the scanned events under a dir come after the creation event of the dir.
We do this recursively, so at the end we will have the following events and times:
```
- root                 <--- T0
  |-- foo              <--- T0 + 1 microsecond
  |   |-- foo0.go      <--- T0 + 2 microseconds
  |   |-- foo1.go      <--- T0 + 2 microseconds
  |   |-- bar          <--- T0 + 2 microseconds
  |   |   |-- bar0.go  <--- T0 + 3 microseconds
  |   |   |-- bar1.go  <--- T0 + 3 microseconds
```
Even though we have no idea when those files were created, we do know they must have been created some time after their parent dir. The +1 microsecond makes sure of that.
Why 1 microsecond?
I do not have a very good reason for that particular value, but I think any small value should work. 1 microsecond is also convenient for testing, because the test can verify these events come right after the parent dir's event.
And internally, when we store these events, we have two choices. Keep in mind that fsnotify is still sending us new events all the time, and maybe one of them is the deletion of the root dir! (Say `Tf` is the time the root dir gets deleted.)
This CL uses the second approach.
This is another reason for the `+1 microsecond`: if `Tf` is the time root gets deleted, the `+1 microsecond` makes sure the backfilled creation events for root's files and subdirs come before root's deletion event.
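A tiny sketch of that ordering with made-up timestamps (not the CL's code; the real stamps live in stampedEvents):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t0 := time.Now()                     // fsnotify reports the creation of root at T0.
	backfill := t0.Add(time.Microsecond) // every scanned entry under root shares T0 + 1µs.
	tf := t0.Add(50 * time.Millisecond)  // pretend the deletion of root arrives 50ms later.

	fmt.Println(t0.Before(backfill)) // true: backfilled events sort after root's creation event
	fmt.Println(backfill.Before(tf)) // true: and before any later fsnotify event for root
}
```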
Offline discussion is also welcome. :D
Sorry for the slow response. This is indeed very complicated, and I didn't have time to think about it until now!
It's not clear to me what we need to guarantee. Maybe writing down invariants would be helpful. It seems like we should guarantee that the sequence of events is eventually consistent.
Let's consider some simple cases to make discussion easier.
Let's say we get an add(dir), delete(dir/subdir) from fsnotify. When we get the add(dir), we scan and get some state of dir/subdir (who knows if it's there or not). How can we guarantee that this is consistent?
It feels like we can't know whether the delete(dir/subdir) event from fsnotify is logically before or after our scan of dir/subdir, so we need to ensure we scan again after we get the delete(dir/subdir).
I think offline discussion would help us sort this out faster 😊
entries, err := os.ReadDir(root)
Hongxiang Jiang: Why do we read our own entries, rather than use something like filepath.WalkDir?
How does this behave with symlinks? If I have a symlink loop, will this run forever? Maybe add a test.
I have changed it to WalkDir, but I only walk the top-most level of the dir. There is a reason; please also read the other long comment below.
First, we need to establish an agreement: we can only scan a dir after the watch is registered. This makes sure we never miss any events. We may get duplicate events, but I think that's acceptable.
Also, we know watching a dir may take longer than expected. We have logic to keep retrying until it succeeds.
But `filepath.WalkDir` walks directories sequentially, so it will not move on to the next dir until the current dir is finished. We can use a separate goroutine to handle the registration, which is basically the current implementation.
The current implementation is: go through all the entries; if an entry is a dir, spin up a goroutine to watch it and scan it; if there is a subdirectory, spin up another goroutine; and so on.
Hi Rob, I have made the corresponding changes to the code after our last discussion. I have changed the code to use `filepath.WalkDir`. But `filepath.WalkDir` uses a DFS strategy by default, while the scan logic should follow BFS.
Why BFS?
When scanning a dir, we need to add all of its entries to the events and try to watch the entries that are dirs. So we can't walk the entire tree; instead we only walk the current dir.
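Not the CL's code, but a minimal sketch of a top-level-only walk (walkTopLevel and visit are hypothetical names). Note that filepath.WalkDir does not follow symlinks, so it reports symlink entries without descending into them.

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// walkTopLevel visits only the immediate entries of dir, never descending
// into subdirectories; each discovered subdirectory can then be handled by
// its own goroutine, as described above.
func walkTopLevel(dir string, visit func(path string, d fs.DirEntry)) error {
	return filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if path == dir {
			return nil // don't report the root itself
		}
		visit(path, d)
		if d.IsDir() {
			return fs.SkipDir // don't descend into this subdirectory
		}
		return nil
	})
}

func main() {
	_ = walkTopLevel(".", func(path string, d fs.DirEntry) {
		fmt.Println(path, d.IsDir())
	})
}
```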
Test for symbolic link.
I think we need to have a test for it. I will add a TODO so this CL will not keep growing.
records := &stampedEvents{
events: make([]protocol.FileEvent, 0),
stamps: []time.Time{stamp},
}
Hongxiang Jiang: Especially with a literal here, let's remove the constructor. IMO a constructor should not be optional.
And per previous note, I don't know why we're differentiating an empty, allocated slice rather than just relying on the zero value nil slice.
Done. I'm not sure I understand your statement correctly. Do you think it is OK to leave it without a constructor
and use the default `new(stampedEvents)`?
The `stampedEvents` struct is deleted.
After offline discussion, there are a few things we have agreed on. There are two kinds of events: events captured by fsnotify, and creation events synthesized by directory scans.
We trust that events captured by fsnotify arrive in a logical order, so we should insert the synthesized events right after the fsnotify-captured event that triggered the scan.
If `CN` is a dir creation event, we trigger a scan to synthesize creation events; all of those events should be inserted right after `CN`. From the file watcher consumer's perspective, the entire tree rooted at `CN` is created atomically.
```
C1 C2 C3 ... CN ...          (fsnotify captured)
              |
              S1 S2 S3 ...   (synthesized)

file watcher consumer sees:
C1 C2 C3 ... CN S1 S2 S3 ...
```
NOTE: the file watcher has a concept of timely flushes, so the synthesized events may be distributed across two or more flushes. If, unfortunately, the synthesized events do not make it into the flush containing CN, they will be the first events in the next flush.
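To make the intended interleaving concrete, here is a self-contained sketch of the timestamp merge (hypothetical batch type and string events; the real code does this with stampedEvents of protocol.FileEvents in drainEvents above):

```go
package main

import (
	"fmt"
	"slices"
	"time"
)

// batch is a group of synthesized events sharing one timestamp
// (a stand-in for the CL's stampedEvents).
type batch struct {
	stamp  time.Time
	events []string
}

// merge interleaves individually stamped fsnotify events with synthesized
// batches by timestamp, so each batch lands right after the creation event
// whose scan produced it.
func merge(events []string, stamps []time.Time, batches []batch) []string {
	slices.SortFunc(batches, func(a, b batch) int { return a.stamp.Compare(b.stamp) })
	var out []string
	i, j := 0, 0
	for i < len(events) && j < len(batches) {
		if stamps[i].After(batches[j].stamp) {
			out = append(out, batches[j].events...)
			j++
		} else {
			out = append(out, events[i])
			i++
		}
	}
	out = append(out, events[i:]...)
	for ; j < len(batches); j++ {
		out = append(out, batches[j].events...)
	}
	return out
}

func main() {
	t0 := time.Now()
	events := []string{"create foo", "delete foo"}
	stamps := []time.Time{t0, t0.Add(time.Second)}
	synthesized := []batch{{stamp: t0.Add(time.Microsecond), events: []string{"create foo/bar.go"}}}
	fmt.Println(merge(events, stamps, synthesized))
	// Output: [create foo create foo/bar.go delete foo]
}
```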
// Merge the scanned events and fsnotify watched events.
Resolved. There is another very, very small edge case that could confuse the client. I have left a TODO.
The case: the watch for a dir is registered, but a flush happens before we have synthesized all the creation events for that dir's entries. Those synthesized events will then be sent in the next flush, which may place them after later fsnotify-captured events. This is a very tricky case; I think the only solution is to make watching and synthesizing atomic.