
Nvidia Information Disclosure / Memory Vulnerability on Linux and General OS Memory Protection

https://www.devze.com 2023-02-05 14:08 Source: web

I thought this was expected behavior?

From: http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7675-1380-00.htm

Paraphrased summary: "Working on the Linux port we found that cudaHostAlloc/cuMemHostAlloc CUDA API calls return un-initialized pinned memory. This hole may potentially allow one to examine regions of memory previously used by other programs and Linux kernel. We recommend everybody to stop running CUDA drivers on any multiuser system."

My understanding was that "normal" malloc returns uninitialized memory, so I don't see what the difference here is...

The way I understand memory allocation, the following could happen:

- userA runs a program on a system that crunches a bunch of sensitive information. When the calculations are done, the results are written to disk, the process exits, and userA logs off.

- userB logs in next. userB runs a program that requests all available memory in the system, and writes the contents of his uninitialized memory, which contains some of userA's sensitive information that was left in RAM, to disk.

I have to be missing something here. What is it? Is memory zeroed out somewhere? Is kernel/pinned memory special in a relevant way?


Memory returned by malloc() may be nonzero, but only after being used and freed by other code in the same process. Never another process. The OS is supposed to rigorously enforce memory protections between processes, even after they have exited.

Kernel/pinned memory is only special in that it apparently gave a kernel mode driver the opportunity to break the OS's process protection guarantees.

So no, this is not expected behavior; yes, this was a bug. Kudos to NVIDIA for acting on it so quickly!


The only part of a CUDA installation that requires root privileges is the NVIDIA driver. As a result, everything done with the NVIDIA compiler and linker can be done using regular system calls and standard compilation (provided you have the proper information). If a security hole lies there, it remains whether or not cudaHostAlloc/cuMemHostAlloc is modified.

I am dubious about the first answer on this post. The man page for malloc specifies that the memory is not cleared, and the man page for free does not mention any clearing either. Clearing memory seems to be the responsibility of whoever codes the sensitive section, which leaves the problem of an unexpected (rare) exit. Apart from VMS (a good but not widely used OS), I don't think any OS accepts the performance cost of systematically clearing memory. I am also not clear how, within a newly allocated block on the heap, the system could track what was previously in the process's area and what was not.

My conclusion is: if you need a strict level of privacy, do not use a multi-user system (or use VMS).

