Running ECHO SCP in the background

All other questions regarding DCMTK

Masato
Posts: 31
Joined: Fri, 2009-03-13, 01:41
Location: United States

Running ECHO SCP in the background

#1 Post by Masato »

Hello,

I am developing a DICOM application that implements an ECHO SCP in the background. In other words, I run the ECHO SCP processing in a separate thread. The remote application can send a C-ECHO-RQ, and my application successfully detects it and sends the C-ECHO-RSP back to the caller.
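
For reference, the echo handling in the worker thread looks roughly like this (a simplified sketch rather than the exact code; serveEchoAssociation is just a placeholder name):

Code:

#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmnet/dimse.h"

// Simplified sketch: serve C-ECHO requests on one accepted association.
// Assumes "assoc" was obtained via ASC_receiveAssociation() and that the
// Verification SOP Class was accepted during association negotiation.
static void serveEchoAssociation(T_ASC_Association *assoc)
{
  OFCondition cond = EC_Normal;
  T_DIMSE_Message msg;
  T_ASC_PresentationContextID presID;
  DcmDataset *statusDetail = NULL;

  // handle DIMSE commands until the peer releases or aborts the association
  while (cond.good())
  {
    cond = DIMSE_receiveCommand(assoc, DIMSE_BLOCKING, 0, &presID, &msg, &statusDetail);
    delete statusDetail;                  // not used for C-ECHO
    statusDetail = NULL;

    if (cond.good())
    {
      if (msg.CommandField == DIMSE_C_ECHO_RQ)
        cond = DIMSE_sendEchoResponse(assoc, presID, &msg.msg.CEchoRQ, STATUS_Success, NULL);
      else
        cond = DIMSE_BADCOMMANDTYPE;      // only C-ECHO is supported here
    }
  }

  if (cond == DUL_PEERREQUESTEDRELEASE)
    ASC_acknowledgeRelease(assoc);        // confirm the peer's A-RELEASE-RQ

  ASC_dropSCPAssociation(assoc);          // close the transport connection
  ASC_destroyAssociation(&assoc);         // free the T_ASC_Association structure
}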

However, when the program terminates, there is a memory leak. I think the network object that was created for the ECHO SCP is not deleted properly; to be precise, ASC_dropNetwork() is never called.

I am developing on the VC++ platform with MFC. The program was developed using DCMTK's network tool "storescp" as a basis.

How can I solve this problem?

Thank you,
Regards

Michael Onken
DCMTK Developer
Posts: 2048
Joined: Fri, 2004-11-05, 13:47
Location: Oldenburg, Germany

#2 Post by Michael Onken »

Hi,

does the memory leak also occur when using storescp in single-process or multi-process mode? storescp seems to call ASC_dropNetwork(), so I'm not sure why you don't call it yourself in the thread once you finish working on that echo network connection.
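
Roughly, something like this at the end of the worker thread (just a sketch; shutdownEchoNetwork is a made-up name, and net is the T_ASC_Network your thread created):

Code:

#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmnet/dimse.h"

// Sketch only: a helper the worker thread could call once it has finished
// with the network it created via ASC_initializeNetwork().
static void shutdownEchoNetwork(T_ASC_Network *&net)
{
  if (net == NULL)
    return;                       // nothing to release

  // counterpart of ASC_initializeNetwork(); frees the T_ASC_Network structure
  OFCondition cond = ASC_dropNetwork(&net);
  if (cond.bad())
    DimseCondition::dump(cond);   // report why the cleanup failed
}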

Regards,
Michael

Masato
Posts: 31
Joined: Fri, 2009-03-13, 01:41
Location: United States

#3 Post by Masato »

Thanks for the quick response.

I think the memory leak does not occur in single-process mode. It would be good to call ASC_dropNetwork() in the calling thread, but the problem lies in the following code.

In the storescp program, the network object is created and the process then waits indefinitely in a while loop, as follows:

.....

Code:

while (cond.good())
  {
    /* receive an association and acknowledge or reject it. If the association was */
    /* acknowledged, offer corresponding services and invoke one or more if required. */
    cond = acceptAssociation(net, asccfg);

    /* remove zombie child processes */
    cleanChildren(-1, OFFalse);
#ifdef WITH_OPENSSL
    /* since storescp is usually terminated with SIGTERM or the like,
     * we write back an updated random seed after every association handled.
     */
    if (tLayer && opt_writeSeedFile)
    {
      if (tLayer->canWriteRandomSeed())
      {
        if (!tLayer->writeRandomSeed(opt_writeSeedFile))
        {
          CERR << "Error while writing random seed file '" << opt_writeSeedFile << "', ignoring." << endl;
        }
      }
      else
      {
        CERR << "Warning: cannot write random seed, ignoring." << endl;
      }
    }
#endif
    // if running in inetd mode, we always terminate after one association
    if (dcmExternalSocketHandle.get() >= 0)
    {
      break;
    }

    // if running in multi-process mode, always terminate child after one association
    if (DUL_processIsForkedChild())
    {
      break;
    }

  }

  /* drop the network, i.e. free memory of T_ASC_Network* structure. This call */
  /* is the counterpart of ASC_initializeNetwork(...) which was called above. */
  cond = ASC_dropNetwork(&net);
  if (cond.bad())
  {
    DimseCondition::dump(cond);
    return 1;
...

So ASC_dropNetwork() is only called after the while loop ends. When this code runs in a different thread and the main program terminates, the loop never exits, so there is no way ASC_dropNetwork() is ever called.
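
For ASC_dropNetwork() ever to be reached, the loop would have to poll for incoming associations instead of blocking forever, e.g. along these lines (a sketch only; stopRequested and handleOneAssociation are made-up names standing in for a properly synchronized shutdown flag and for storescp's acceptAssociation()):

Code:

#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmnet/dimse.h"

// Both names below are made up for this sketch: "stopRequested" would be a
// flag the main program sets before waiting for the worker thread to finish
// (with proper synchronization), and "handleOneAssociation" stands for the
// application's equivalent of storescp's acceptAssociation().
extern volatile OFBool stopRequested;
OFCondition handleOneAssociation(T_ASC_Network *net);

static int runEchoScpLoop(T_ASC_Network *net)
{
  OFCondition cond = EC_Normal;

  while (cond.good() && !stopRequested)
  {
    // poll for at most one second instead of blocking forever, so the loop
    // can notice the shutdown request in a timely manner
    if (!ASC_associationWaiting(net, 1))
      continue;                        // nothing pending, re-check the flag

    // an association request is pending: accept and handle it
    cond = handleOneAssociation(net);
  }

  // this line is now actually reached when the flag is set
  cond = ASC_dropNetwork(&net);
  if (cond.bad())
  {
    DimseCondition::dump(cond);
    return 1;
  }
  return 0;
}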

Maybe I have been implementing this in the wrong way?


Thank you.
Regards,
