I’m trying to create a simple Win32 DLL. As the interface between the DLL and the EXE I use C functions, but inside the DLL I use a C++ singleton object. The following is an example of my DLL implementation:
// MyDLLInterface.cpp file --------------------
#include "stdafx.h"
#include <cstring>
#include <memory>
#include "MyDLLInterface.h"

class MySingleton
{
    friend class std::auto_ptr< MySingleton >;
    static std::auto_ptr< MySingleton > m_pInstance;

    MySingleton()
    {
        m_pName = new char[32];
        strcpy(m_pName, "MySingleton");
    }

    virtual ~MySingleton()
    {
        delete [] m_pName;
    }

    MySingleton(const MySingleton&);
    MySingleton& operator=(const MySingleton&);

public:
    static MySingleton* Instance()
    {
        if (!m_pInstance.get())
            m_pInstance.reset(new MySingleton);
        return m_pInstance.get();
    }

    static void Delete()
    {
        m_pInstance.reset(0);
    }

    void Function() {}

private:
    char* m_pName;
};

std::auto_ptr<MySingleton> MySingleton::m_pInstance(0);

void MyInterfaceFunction()
{
    MySingleton::Instance()->Function();
}

void MyInterfaceUninitialize()
{
    MySingleton::Delete();
}
// MyDLLInterface.h file --------------------
#if defined(MY_DLL)
#define MY_DLL_EXPORT __declspec(dllexport)
#else
#define MY_DLL_EXPORT __declspec(dllimport)
#endif
MY_DLL_EXPORT void MyInterfaceFunction();
MY_DLL_EXPORT void MyInterfaceUninitialize();
The problem, or the question, that I have is the following: if I don't call MyInterfaceUninitialize() from my EXE's ExitInstance(), I have a memory leak (the m_pName pointer). Why does this happen? It looks like the destruction of MySingleton happens after the EXE exits. Is it possible to force the DLL or EXE to destroy the MySingleton a little bit earlier, so I don't need to call the MyInterfaceUninitialize() function?
EDIT: Thanks for all your help and explanations. Now I understand that this is a design issue. If I want to stay with my current solution, I need to call the MyInterfaceUninitialize() function in my EXE. If I don't do it, that is also OK, because the singleton destroys itself when it leaves the EXE's scope (but I have to live with disturbing debugger messages). The only way to avoid this behavior is to rethink the whole implementation.
I can also set my DLL as a "Delay Loaded DLL" under Linker->Input in Visual Studio to get rid of the disturbing debugger messages.
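For concreteness, here is a minimal sketch of what that cleanup call could look like in my EXE; CMyApp stands for a hypothetical CWinApp-derived MFC application class, and everything else comes from the interface above:

// Somewhere in the EXE; assumes it links against the DLL and
// includes MyDLLInterface.h.
int CMyApp::ExitInstance()
{
    // Destroy the DLL-side singleton before static destruction runs,
    // so the debugger no longer reports m_pName as leaked.
    MyInterfaceUninitialize();
    return CWinApp::ExitInstance();
}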
If I don't call MyInterfaceUninitialize() from my EXE's ExitInstance(), I have a memory leak (the m_pName pointer). Why does this happen?
This is not a leak; this is the way auto_ptrs are supposed to work. They release their instance when they go out of scope (which, in your case, is when the DLL is unloaded).
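To see this in isolation, here is a minimal standalone sketch (C++03, since auto_ptr is deprecated in later standards) of a static auto_ptr releasing its object only during static destruction, after main() returns:

#include <cstdio>
#include <memory>

struct Tracer
{
    ~Tracer() { std::puts("Tracer destroyed"); }
};

// A static auto_ptr, like m_pInstance in the question.
static std::auto_ptr<Tracer> g_ptr(new Tracer);

int main()
{
    std::puts("leaving main");
    return 0;
}
// Prints "leaving main" first; "Tracer destroyed" appears only
// afterwards, during static destruction.

The singleton in the question behaves the same way: by the time the debugger takes its leak snapshot, the static auto_ptr has not yet been destroyed, so the allocation still looks live.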
It looks like the destruction of MySingleton happens after the EXE exits.
Yes.
Is it possible to force the DLL or EXE to destroy the MySingleton a little bit earlier, so I don't need to call the MyInterfaceUninitialize() function?
Not without calling this function.
You can take advantage of the DllMain callback function to take appropriate action when the DLL is loaded/unloaded or a process/thread attaches/detaches. You could then allocate objects per attached process/thread instead of using a singleton since this callback function is executed in the context of the attached thread. With that in mind, also take a look at Thread Local Storage (TLS).
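For example, a minimal sketch of the DllMain approach could look like the following; note that only limited work is safe inside DllMain, so keep such cleanup small:

// MyDLLInterface.cpp (addition) -- releases the singleton when the
// process detaches from the DLL, instead of waiting for static
// destruction.
#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    switch (fdwReason)
    {
    case DLL_PROCESS_ATTACH:
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
        MySingleton::Delete(); // same effect as MyInterfaceUninitialize()
        break;
    }
    return TRUE;
}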
Honestly, for the example you gave, it doesn't matter whether you call the Uninitialize method from your ExitInstance. Yes, the debugger will complain about the unreleased memory, but then again, it's a singleton; it's intended to live for an extended duration.
Only if you have some state in the DLL that needs to be persisted at exit, or if you are dynamically loading and unloading DLLs multiple times, do you need to be diligent about cleaning up. Otherwise, just letting the OS tear down the process at exit is fine; the reported memory leak is inconsequential at that point.