I faced a similar problem: I had to burn a study to a CD once the study was fully received. Since it is perfectly legal to add more series to an existing study after it has been performed, there is not really a sense of “closure”, if you will, for a study, at least from the DICOM point of view.
I solved this by adding a timer. Since you can at least assume that the whole study will be sent to you at once, you can track the arrival time of the last image received and work under the assumption that if no additional image for a given study arrives within a certain amount of time, and no transfer is currently in progress, it is safe to conclude that no additional images are coming.
We settled on a 5-minute waiting time and it works just fine. Be aware that the waiting period, especially if it is too short, could elapse while you are still receiving a large image, so even when the timer expires you should check that no additional image for that study is currently in transit.
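A minimal sketch of that idle check in plain Java (the class and field names below are just placeholders; only the 5-minute idle rule and the in-transit check come from the description above):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StudyCompletionWatcher {

    private static final long IDLE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes

    // Per-study bookkeeping: when the last image arrived and whether one is arriving right now.
    static class StudyState {
        volatile long lastImageReceivedMs;
        volatile boolean inTransit;
    }

    private final Map<String, StudyState> studies = new ConcurrentHashMap<>();

    // Call this when a C-STORE for the study begins, before the data has fully arrived.
    public void imageStarted(String studyInstanceUid) {
        studies.computeIfAbsent(studyInstanceUid, uid -> new StudyState()).inTransit = true;
    }

    // Call this from your C-STORE handler every time an image for a study has been received.
    public void imageReceived(String studyInstanceUid) {
        StudyState state = studies.computeIfAbsent(studyInstanceUid, uid -> new StudyState());
        state.lastImageReceivedMs = System.currentTimeMillis();
        state.inTransit = false;
    }

    // True if nothing for this study is in transit and it has been idle long enough.
    public boolean isProbablyComplete(String studyInstanceUid) {
        StudyState state = studies.get(studyInstanceUid);
        return state != null
                && !state.inTransit
                && System.currentTimeMillis() - state.lastImageReceivedMs > IDLE_TIMEOUT_MS;
    }
}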
Hope it helps.
OK! I will try to be as specific as possible... So, a question first: by "dcm4chee" do you mean the dcm4chee PACS server? All the bundles I have checked run on an application server, so it is a web application, right? I do not want to install JBoss or any other application server on the workstation that will host the DICOM Router. This is a requirement I did not set :( It is set by the project for which I am implementing this tool! The dcm-proxy project is also a web application, right?

So, instead of using these tools/web apps, I would like to build my own lightweight DICOM "server" using the dcm4che toolkit (v2), and I am looking for specific information (or so I think :) ). As a base I am using the dcm4che2 toolkit's "dcmrcv" tool, which is supposed to be a DICOM listener. I have modified the DcmRcv class and added some functionality; the next step is to figure out how to check whether a study has completed its send (i.e. all DICOM file send requests of a study have been sent).

So, any guidelines on this?
Thanks!
On Wednesday, 5 November 2014 00:24:16 UTC+2, fleetwoodfc wrote:
We had a small embedded database (http://hsqldb.org/) where we stored a representation of the study as the images were arriving: just some basic data like the Study Instance UID and the physical disk location where we stored the DICOM files, plus an additional date column that was updated each time a new image from the study arrived, to “reset” the timer.
Then we had a scheduled task (http://quartz-scheduler.org/) that would scan the database periodically for new studies to burn, checking whether the date column was now old enough.
We had two additional columns. One was a Boolean, in_transit or something like that, which we set to “true” once a new image from that study began arriving and back to “false” once the transfer completed; the other was a Boolean that we set to “true” once the study was processed / burned.
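For illustration, a table along those lines in embedded HSQLDB might look roughly like this (table and column names are placeholders, not the original schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StudyDbSetup {
    public static void main(String[] args) throws Exception {
        // In-memory HSQLDB for the example; a real setup would use a file-based JDBC URL.
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:studydb", "SA", "");
             Statement st = con.createStatement()) {
            st.execute(
                "CREATE TABLE study ("
                + " study_iuid    VARCHAR(64) PRIMARY KEY,"   // Study Instance UID
                + " disk_location VARCHAR(256),"              // where the DICOM files are stored
                + " last_received TIMESTAMP,"                 // updated on every new image ("resets" the timer)
                + " in_transit    BOOLEAN DEFAULT FALSE,"     // an image for this study is arriving right now
                + " processed     BOOLEAN DEFAULT FALSE)");   // study already burned
        }
    }
}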
So, basically, the scheduled task simply had to run an SQL query to get all studies that were not currently in transit, had not been processed already, and had an old enough “last image received” date, and then call the burning process for those studies.
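A sketch of what that job could look like with Quartz and plain JDBC (the SQL refers to the placeholder table above; only the selection criteria come from the description):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class BurnPendingStudiesJob implements Job {

    private static final long IDLE_TIMEOUT_MS = 5 * 60 * 1000; // same 5-minute idle rule as above

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        Timestamp cutoff = new Timestamp(System.currentTimeMillis() - IDLE_TIMEOUT_MS);
        String sql = "SELECT study_iuid, disk_location FROM study"
                   + " WHERE in_transit = FALSE AND processed = FALSE AND last_received < ?";
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:studydb", "SA", "");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, cutoff);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    burnStudy(rs.getString("study_iuid"), rs.getString("disk_location"));
                }
            }
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }

    private void burnStudy(String studyIuid, String diskLocation) {
        // Hand the study directory over to the burning process,
        // then set processed = TRUE for this study_iuid.
    }
}

The job itself would be registered with a Quartz Scheduler and a simple repeating trigger (e.g. once a minute).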
We used the source code of the dcmrcv tool as a guide to implement a StoreSCP that would not simply receive the images and store them to disk, but also read the received DICOM tags to create the database record for the study and perform all the other database updates.
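The database side of such a receiver could be as simple as something like this (plain JDBC against the placeholder table above; the method is illustrative, not the original code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class StudyRegistry {

    // Insert the study on its first image, otherwise just refresh the "last received" timestamp.
    public static void imageArrived(String studyIuid, String diskLocation) throws SQLException {
        Timestamp now = new Timestamp(System.currentTimeMillis());
        try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:studydb", "SA", "")) {
            try (PreparedStatement upd = con.prepareStatement(
                    "UPDATE study SET last_received = ?, in_transit = FALSE WHERE study_iuid = ?")) {
                upd.setTimestamp(1, now);
                upd.setString(2, studyIuid);
                if (upd.executeUpdate() > 0) {
                    return; // existing study: resetting the timer is enough
                }
            }
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO study (study_iuid, disk_location, last_received) VALUES (?, ?, ?)")) {
                ins.setString(1, studyIuid);
                ins.setString(2, diskLocation);
                ins.setTimestamp(3, now);
                ins.executeUpdate();
            }
        }
    }
}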
Hope it helps
Now we are reaching the limits of my memory, but I believe that the information you are looking for was located in the dataStream. dataStream.readDataset() gives you a DicomObject and you can obtain the information from there.
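In dcm4che2 terms that would look roughly like this inside the C-STORE handling code (a sketch only; check the DcmRcv source for the exact callback signature):

import org.dcm4che2.data.DicomObject;
import org.dcm4che2.data.Tag;
import org.dcm4che2.net.PDVInputStream;

public class TagExtraction {

    // Read the whole received dataset and pull out the Study Instance UID (0020,000D).
    static String extractStudyUid(PDVInputStream dataStream) throws java.io.IOException {
        DicomObject dataset = dataStream.readDataset();
        String studyIuid = dataset.getString(Tag.StudyInstanceUID);
        // ...write the dataset to disk and update the study table here...
        return studyIuid;
    }
}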
May I suggest you search the forum or ask another question with that specific query? I am sure somebody else could help you with that, because it has probably been done a lot by now.
Cheers!