Added support for writing received DICOM content to Writable streams #41
Conversation
const stream = this.dimseStream || this.dimseStoreStream;
stream.write(pdv.getValue());
Here it should check the return value of write(). If it is not true, then we need to apply backpressure and wait for the drain event before resuming. https://nodejs.org/docs/latest/api/stream.html#event-drain
Nice catch! Thank you Kinson! The only thing that bothers me is how to wait here for the drain event... any ideas?
Any ideas on this @kinsonho?
@PantelisGeorgiadis sorry for the late reply.
A pattern that I saw others doing is the following:
if (!stream.write(pdv.getValue())) {
  await new Promise((resolve) => stream.once('drain', resolve));
}
Thank you @kinsonho! The async/await pattern is not used in this library. Do you see any drawbacks in starting to use it (e.g. compatibility issues)?
With createStoreWriteableStream(), the result is just the dataset, not a full P10 object, right? For createDatasetFromStoreWriteableStream(), if I would like to have a filtered dataset for subsequent processing after the cStoreRequest, then I will need to split the StoreWriteableStream, create another accumulator writer stream to collect the attributes that I need, and then pass it to createDatasetFromStoreWritableStream(). Correct?
Thanks for working on this. Just some quick comments when reviewing the changes for now. I will try it out later.
You are right Kinson. The result accumulated in the writable created by createStoreWritableStream is just the dataset, not a full P10 object. Regarding the "need to split the StoreWriteableStream and create another accumulator writer stream to collect the attributes", I don't think that this is necessary; the custom writable can parse the metadata it needs as the chunks arrive, as in the following sample:

const { Writable } = require('stream');
// Note: Scp, Dataset, CStoreResponse, Status, StorageClass, TransferSyntax and
// PresentationContextResult are assumed to be imported from the library.
class StreamingScp extends Scp {
  constructor(socket, opts) {
    super(socket, opts);
    this.association = undefined;
  }

  associationRequested(association) {
    this.association = association;
    const contexts = association.getPresentationContexts();
    contexts.forEach((c) => {
      const context = association.getPresentationContext(c.id);
      if (Object.values(StorageClass).includes(context.getAbstractSyntaxUid())) {
        const transferSyntaxes = context.getTransferSyntaxUids();
        transferSyntaxes.forEach((transferSyntax) => {
          if (
            transferSyntax === TransferSyntax.ImplicitVRLittleEndian ||
            transferSyntax === TransferSyntax.ExplicitVRLittleEndian
          ) {
            context.setResult(PresentationContextResult.Accept, transferSyntax);
          } else {
            context.setResult(PresentationContextResult.RejectTransferSyntaxesNotSupported);
          }
        });
      } else {
        context.setResult(PresentationContextResult.RejectAbstractSyntaxNotSupported);
      }
    });
    this.sendAssociationAccept();
  }

  createStoreWritableStream(acceptedPresentationContext) {
    // Create a custom Writable that will handle the incoming chunks.
    return new StreamingWritable(acceptedPresentationContext);
  }

  createDatasetFromStoreWritableStream(writable, acceptedPresentationContext, callback) {
    // At this point, if there is no interest in the full Dataset that includes pixel data
    // (because it was written to a file or uploaded to the cloud),
    // just return the metadata dataset to the Scp.cStoreRequest handler (could also be undefined).
    callback(writable.getMetadataDataset());
  }

  cStoreRequest(request, callback) {
    console.log(request.getDataset());
    const response = CStoreResponse.fromRequest(request);
    response.setStatus(Status.Success);
    callback(response);
  }

  associationReleaseRequested() {
    this.sendAssociationReleaseResponse();
  }
}

class StreamingWritable extends Writable {
  constructor(acceptedPresentationContext, options) {
    super(options);
    this.metadataDataset = undefined;
    this.shouldParse = true;
    this.acceptedPresentationContext = acceptedPresentationContext;
  }

  _write(chunk, encoding, callback) {
    if (this.shouldParse) {
      // First write occurred. There's a good chance of having the complete
      // metadata, depending on the max PDU value.
      // Try to parse the chunk, up to the pixel data, and create a Dataset.
      this.metadataDataset = new Dataset(
        chunk,
        this.acceptedPresentationContext.getAcceptedTransferSyntaxUid(),
        {
          ignoreErrors: true,
          untilTag: '7FE00010', // PixelData
          includeUntilTagValue: false,
        }
      );
      this.shouldParse = false;
    }
    // Evaluate the metadata Dataset (this.metadataDataset) and decide the proper course of action,
    // i.e. write/append the chunk to a file, upload it to the cloud or just accumulate it in memory.
    // doStuffWithChunk(chunk)
    // Call the callback to notify that you have finished working on this chunk.
    callback(/* new Error('Something went wrong during chunk handling') */);
  }

  _final(callback) {
    // At this point no other chunk will be received.
    // Call the callback to notify that you have finished receiving chunks and free resources.
    callback();
  }

  // Returns the parsed metadata dataset.
  getMetadataDataset() {
    return this.metadataDataset;
  }
}
@kinsonho, @richard-viney did you have the chance to test this? Any feedback?
Sorry for the long delay. I am planning to work on it this week. I will let you know.
Sorry for the big delay, this looks good and we'll be testing it out next week. The branch is now in conflict with the master branch but we can sort that out no problem.
@richard-viney I just rebased to latest master. My only concern is that we still haven't handled Kinson's comment regarding the stream backpressure (drain event).
Yes the drain comment is correct. In practice, if streaming is going to disk then the incoming network data will most likely arrive at a slower pace than local disk writes, so you'd get away with it, but for other use cases it could become an issue. I haven't yet looked at the implications for the library/protocol of adding a wait on the stream here.
Would it be possible to pass the C-STORE request to these new functions as well?
I've implemented the above on a fork and it seems to work. Another couple of things have come up too:
I've got this working: I ended up writing a custom DICOM P10 header prior to streaming the rest to a temporary file, which results in a valid DICOM P10 file. Still more testing to be done though. IMO the most important change to add to this branch is making the C-STORE request available, which I did in a fork here: HeartLab@4192ec9. The only other issue was the stream backpressure one mentioned above. Thanks.
Great news @richard-viney! Thanks for working on this! I incorporated your changes into the working branch and made an unsuccessful effort to handle stream backpressure, which I reverted… ☹
An attempt was made to implement C-STORE SCP streaming support, based on the ideas discussed in #39.

More specifically, the Scp class was enriched with two extra functions that give the caller the opportunity to create the stream that will receive the incoming fragment buffers and to translate the accumulated fragments back into a Dataset. The functions are createStoreWritableStream and createDatasetFromStoreWritableStream, respectively. The first provides the user with the accepted presentation context (to assess the dataset type and transfer syntax) and expects a Writable stream to be returned. The latter provides the user with the previously created Writable stream, the accepted presentation context and a callback that can be used to return the parsed Dataset. The default implementation still uses memory buffers.

Below is a sample implementation of an Scp class that accumulates the incoming C-STORE data to temp file streams (using the temp library). Once the reception is over (last PDV fragment received), the files are read back into a Dataset, which is passed to the Scp.cStoreRequest method for further processing (this is not a mandatory step; the callback can return undefined if there is no interest in passing the dataset to the Scp.cStoreRequest method).

@richard-viney, @kinsonho, please review and let me know your thoughts.