Hi all,
a newbie question here.
I have written a small Erlang server following the application behaviour, here:
https://github.com/daitangio/er_zauker/blob/erlang-24-migration/src/er_zauker_app.erl
To make a long story short, my server scans a set of directories and indexes the files, using Redis as the backend database.
It works well when I run it on a small set of files.
But when I run it on a very large set of files, it seems to "finish" before indexing all of them. When it starts, the client waits until every file is processed and the server can send it a report about the status:
Hi all,
my idea was to be able to monitor the execution, but I still need to explore gen_server plus synchronous calls in the future.
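For what it is worth, a minimal sketch of that gen_server-plus-synchronous-call idea could look like the module below. All names here (index_status_srv, wait_for_report/0, finished/1) are hypothetical, not from er_zauker: the client blocks in gen_server:call/3 until the indexer reports completion, and the server parks the caller with a noreply and answers later via gen_server:reply/2.

```erlang
-module(index_status_srv).
-behaviour(gen_server).
-export([start_link/0, wait_for_report/0, finished/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Client side: blocks until the server calls gen_server:reply/2.
wait_for_report() ->
    gen_server:call(?MODULE, wait_for_report, infinity).

%% Called by the indexer when the last file has been processed.
finished(Report) ->
    gen_server:cast(?MODULE, {finished, Report}).

init([]) ->
    {ok, #{waiting => []}}.

handle_call(wait_for_report, From, State = #{waiting := W}) ->
    %% Do not reply yet: remember the caller, answer on 'finished'.
    {noreply, State#{waiting := [From | W]}}.

handle_cast({finished, Report}, State = #{waiting := W}) ->
    [gen_server:reply(From, Report) || From <- W],
    {noreply, State#{waiting := []}}.
```

The point of the noreply/reply pair is that the server stays free to keep handling other messages while the client is blocked waiting for the report.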
I was able to fix the bug following Maria's suggestion (thank you, Maria!).
The failing processes were dying due to a Redis timeout, probably because I used a Redis MULTI/EXEC transaction, which can lead to race conditions on the Redis side.
I implemented a small tracking map to record failing processes and respawn them. The idea is only to track the timeout errors, so I changed the server to match the "good" and "timeout" 'DOWN' cases like this:
...
{'DOWN', Reference, process, _Pid, normal} ->
    indexerDaemon(RunningWorker - 1, FilesProcessed + 1,
                  maps:remove(Reference, MonitorRefMap));
{'DOWN', Reference, process, Pid, {timeout, Detail}} ->
    %% Mmm, we must assume there are still files to be processed?
    #{Reference := FailedFile} = MonitorRefMap,
    io:format("!! Timeout Error on ~p ~n Detail: ~p~n",
              [FailedFile, {'DOWN', Reference, process, Pid, {timeout, Detail}}]),
    %% We assume a timeout error and push the file back:
    %% drop the old monitor reference, respawn a worker, monitor it again.
    UpdatedRefMap = maps:remove(Reference, MonitorRefMap),
    NewPid = spawn(er_zauker_util, load_file_if_needed, [FailedFile]),
    MonitorRef = erlang:monitor(process, NewPid),
    NewRecoveryRefMap = UpdatedRefMap#{MonitorRef => FailedFile},
    indexerDaemon(RunningWorker, FilesProcessed, NewRecoveryRefMap);
I do not know if there is a smarter way of doing it.
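One small simplification that might help (my own suggestion, not from the code above) is maps:take/2, which looks up and removes the key in a single step, so the lookup and maps:remove/2 collapse into one call. A sketch, as a pure helper with hypothetical names, so the caller decides whether to respawn:

```erlang
-module(down_handler).
-export([handle_down/3]).

%% Classify a 'DOWN' message and update the Ref->File map in one step.
%% Returns {done, Map} for a normal exit, {retry, File, Map} on timeout,
%% or {unknown, Map} if the reference is not being tracked.
handle_down(Reference, normal, MonitorRefMap) ->
    {done, maps:remove(Reference, MonitorRefMap)};
handle_down(Reference, {timeout, _Detail}, MonitorRefMap) ->
    case maps:take(Reference, MonitorRefMap) of
        {File, RestMap} -> {retry, File, RestMap};
        error           -> {unknown, MonitorRefMap}
    end.
```

Keeping the map bookkeeping pure like this also makes the DOWN handling easy to unit-test without spawning any workers.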
Thank you for your hints!!
...