How does this strange 32-bit/64-bit interop solution work?

I'm currently maintaining a piece of software that we outsourced a couple of years ago and that is poorly documented. The piece is a COM server meant to be consumed by third-party applications, plus an installer that does all the necessary deployment.

There's the core, compiled as a 32-bit DLL and meant to be used from 32-bit applications. And there's also a shim, compiled as a 64-bit DLL and intended to be used from 64-bit applications. The shim calls CoCreateInstance() to instantiate the core and redirects calls to it. The core depends on a huge set of other 32-bit libraries.
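For concreteness, here is a stripped-down sketch of what such a forwarding shim can look like. ICore, IID_ICore, CLSID_Core and DoWork are invented names, the IUnknown and class-factory plumbing is omitted, and the CLSCTX_LOCAL_SERVER flag assumes the core ends up activated out of process (for example via a DLL surrogate), which is what the rest of the question suggests; the real code will differ in these details:

    // 64-bit shim: creates the 32-bit core via COM and forwards calls to it.
    // ICore, IID_ICore, CLSID_Core and DoWork are placeholder names; the
    // real interfaces come from the product's IDL.
    #include <windows.h>

    // Stand-in for the core's real interface.
    struct ICore : public IUnknown
    {
        virtual HRESULT STDMETHODCALLTYPE DoWork(BSTR input, BSTR* result) = 0;
    };
    extern const CLSID CLSID_Core;   // placeholder
    extern const IID   IID_ICore;    // placeholder

    class CoreShim : public ICore    // the shim exposes the same interface
    {
        ICore* m_core = nullptr;     // proxy to the out-of-process core
    public:
        HRESULT Init()
        {
            // CLSCTX_LOCAL_SERVER asks for out-of-process activation, so the
            // 32-bit core runs in its own 32-bit process and m_core receives
            // a marshaling proxy rather than a raw in-process pointer.
            return CoCreateInstance(CLSID_Core, nullptr, CLSCTX_LOCAL_SERVER,
                                    IID_ICore,
                                    reinterpret_cast<void**>(&m_core));
        }

        // Every shim method is a plain pass-through; COM marshals the call
        // into the core's process and the result back.
        STDMETHODIMP DoWork(BSTR input, BSTR* result)
        {
            return m_core ? m_core->DoWork(input, result) : E_POINTER;
        }

        // IUnknown, class-factory and registration boilerplate omitted.
    };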

The 32-bit core is registered exactly as an in-proc server normally would be: there's an entry under HKCR\CLSID with the core's class id and the path to the library under InprocServer32. The 64-bit shim is registered the same way, but an Application Id is also introduced for it: the AppID is added under the shim's HKCR\CLSID entry and registered with DCOM, so there's an entry in the DCOM console with that Application Id.
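Spelled out, the registration described above boils down to roughly the following layout (the GUIDs and paths are placeholders; note that on 64-bit Windows the 32-bit core's entries are actually viewed under HKCR\WOW6432Node\CLSID by 64-bit tools):

    32-bit core - ordinary in-proc server registration:
        HKCR\CLSID\{CORE-CLSID}
            InprocServer32 = <path to the 32-bit core DLL>

    64-bit shim - in-proc server registration plus an AppID,
    which is what makes it show up in the DCOM configuration console:
        HKCR\CLSID\{SHIM-CLSID}
            AppID          = {SHIM-APPID}
            InprocServer32 = <path to the 64-bit shim DLL>
        HKCR\AppID\{SHIM-APPID}
            (DCOM settings: identity, launch and access permissions)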

Now the DCOM registration looks strange. Why would the shim be registered with DCOM and not the core? I'd expect the 32-bit core to be registered with DCOM so that it gets instantiated in a separate process, shielded from the 64-bit consumer. But apparently it works as currently done. What's the sense in registering the 64-bit shim, rather than the 32-bit core, with DCOM?


Recall that you can mix and match 32-bit and 64-bit DLLs and processes. The 32-bit core doesn't need DCOM in this case because it can be loaded directly into a 32-bit host process. The 64-bit shim requires DCOM because its developers are taking advantage of COM's ability to host the core in a separate process, even on the same machine. That's required since the 32-bit core can't be loaded into a 64-bit host. DCOM marshals all the calls to and from the core, which sits in a separate 32-bit process. It's an optimal arrangement, since a 32-bit client calling the core doesn't go over DCOM at all. The developers very likely took advantage of this in their testing, debugging in a 32-bit process without DCOM getting in the way, until the core was proven to work well enough to try calling it from a 64-bit application.
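That asymmetry is easiest to see from the client side. A minimal sketch, reusing the placeholder ICore/CLSID_Core/IID_ICore names from the shim sketch in the question (error handling omitted):

    // From a 32-bit client this succeeds: the core DLL is loaded in-proc and
    // every call into it is a plain virtual call, with no marshaling at all.
    ICore* core = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_Core, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_ICore, reinterpret_cast<void**>(&core));

    // From a 64-bit client the same in-proc activation fails (a 32-bit DLL
    // cannot be mapped into a 64-bit process), which is why the 64-bit path
    // has to go through the shim and COM's out-of-process machinery instead.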
