I am investigating whether GPGPUs could be used to accelerate hardware simulation. My reasoning is this: since hardware is by nature highly parallel, why simulate it on largely sequential CPUs?
GPUs would seem excellent for this, if not for their restrictive programming model: you have a single kernel running at a time, and so on.
I have little experience with GPGPU programming, but is it possible to use events or queues in OpenCL / CUDA?
Edit: By hardware simulation I don't mean emulation, but bit-accurate behavioral simulation (as in VHDL behavioral simulation).
I am not aware of any approaches regarding VHDL simulation on GPUs (or a general scheme to map discrete-event simulations), but there are certain application areas where discrete-event simulation is typically applied and which can be simulated efficiently on GPUs (e.g. transportation networks, as in this paper or this one, or stochastic simulation of chemical systems, as done in this paper).
Is it possible to reformulate the problem in a way that makes a discrete time-stepped simulator feasible? If so, simulation on a GPU should be much simpler (and still faster, even though it seems wasteful because the time steps have to be sufficiently small - see this paper on the GPU-based simulation of cellular automata, for example).
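To make the time-stepped idea concrete, here is a minimal CUDA sketch (my own illustration, not taken from any of the cited papers) of a unit-delay gate evaluation: I assume the netlist has been flattened into arrays of gate types and input indices, every gate is re-evaluated once per time step, and the signal state is double-buffered so each step advances the circuit by one gate delay. All names (eval_gates, GateType, the array layout) are hypothetical.

```
#include <cuda_runtime.h>

// Hypothetical flattened netlist: gate g reads signals in_a[g], in_b[g]
// and drives signal g. Primary inputs live at indices >= n_gates and are
// assumed to be pre-filled (and held constant) in both signal buffers.
enum GateType { GATE_AND = 0, GATE_OR = 1, GATE_XOR = 2, GATE_NOT = 3 };

__global__ void eval_gates(const int *gate_type,
                           const int *in_a, const int *in_b,
                           const unsigned char *sig_cur,
                           unsigned char *sig_next,
                           int n_gates)
{
    int g = blockIdx.x * blockDim.x + threadIdx.x;
    if (g >= n_gates) return;

    unsigned char a = sig_cur[in_a[g]];
    unsigned char b = sig_cur[in_b[g]];
    unsigned char out;
    switch (gate_type[g]) {
        case GATE_AND: out = a & b; break;
        case GATE_OR:  out = a | b; break;
        case GATE_XOR: out = a ^ b; break;
        default:       out = !a;    break;  // GATE_NOT ignores its second input
    }
    sig_next[g] = out;  // gate g drives signal g in this simplified layout
}

// Host-side time-stepping loop: one kernel launch per time step,
// swapping the signal buffers between steps (double buffering).
void simulate(int n_gates, int n_steps,
              const int *d_type, const int *d_in_a, const int *d_in_b,
              unsigned char *d_sig_a, unsigned char *d_sig_b)
{
    int threads = 256;
    int blocks = (n_gates + threads - 1) / threads;
    for (int t = 0; t < n_steps; ++t) {
        eval_gates<<<blocks, threads>>>(d_type, d_in_a, d_in_b,
                                        d_sig_a, d_sig_b, n_gates);
        cudaDeviceSynchronize();
        unsigned char *tmp = d_sig_a; d_sig_a = d_sig_b; d_sig_b = tmp;
    }
}
```

The double buffering keeps each step race-free without any synchronization inside the kernel, and the price is exactly the wastefulness mentioned above: every gate is re-evaluated every step, whether or not its inputs changed, which is what an event queue would normally avoid.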
Note, however, that this is still most likely a non-trivial (research) problem, and the reason there is no general scheme (yet) is what you already assumed: implementing an event queue on a GPU is difficult, and most GPU simulation approaches gain their speed-up from clever memory layout, application-specific optimizations, and problem modifications.
This is outside my area of expertise, but it seems that while the following paper discusses gate-level simulation rather than behavioral simulation, it may contain some useful ideas:
Debapriya Chatterjee, Andrew DeOrio, Valeria Bertacco. "Gate-Level Simulation with GPU Computing." http://web.eecs.umich.edu/~valeria/research/publications/TODAES0611.pdf