I have a simulation written in C++ in which I need to maintain a variable number of agents, and I am having trouble deciding how to implement it well. Every agent looks something like this:
class Agent {
public:
    Vector2f pos;
    float health;
    float data[DATASIZE];
    vector<Rule> rules;
};
I need to maintain a variable number of agents in my simulation such that:
- Preferably, there is no upper bound on the number of agents
- I can easily add an Agent
- I can easily remove any agent under some condition (say health<0)
- I can easily iterate all agents and do something (say health--)
- Preferably, I can parallelize the work using OpenMP, because many updates are somewhat costly but completely independent of the other agents.
- (edit) the order of the agents doesn't matter at all
What kind of container or design principles should I use for the agents? Until now I was using a vector, but I find it pretty hard to erase from that structure, and it is something I need to do quite often, as things die all the time. Are there any alternatives I should look at? I thought of something like std::list, but I don't think lists can be parallelized easily, because they are implemented as linked lists traversed through iterator objects?
Thank you
You could leave the agent in the list when dead, ready for re-use. There are no worries about shrinking your container, and you retain the benefits of a vector. You could keep a separate stack of pointers to dead/reusable agents: push onto it when an agent dies, and pop one off to reclaim a slot for a new agent.
for (Agent& agent : agents) {
    if (agent.health > 0)  // skip dead agents
        applyRules(agent); // "process rules": stand-in for the per-agent work
}
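A minimal sketch of that free-list idea, assuming the agents live in a std::vector; the kill/spawn names are illustrative, and I use indices rather than raw pointers because a reallocating push_back would invalidate pointers into the vector:

#include <cstddef>
#include <vector>

// Agent as defined in the question.
std::vector<Agent> agents;          // every slot, live and dead
std::vector<std::size_t> freeSlots; // indices of dead, reusable slots

// Mark an agent dead and remember its slot for reuse.
void kill(std::size_t i) {
    agents[i].health = -1.0f;       // or a dedicated 'alive' flag
    freeSlots.push_back(i);
}

// Reuse a dead slot if one exists; otherwise grow the vector.
std::size_t spawn(const Agent& a) {
    if (!freeSlots.empty()) {
        std::size_t i = freeSlots.back();
        freeSlots.pop_back();
        agents[i] = a;
        return i;
    }
    agents.push_back(a);
    return agents.size() - 1;
}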
"Until now I was using a vector, but I find it pretty hard to erase from that structure, something I need to do quite often, as things die all the time."
How many do you actually expect to die per step of your simulation? What seems like "all the time" to a human could still be very infrequent to a computer. For instance, if each step of your simulation processes thousands of agents but on average only one agent dies every few steps, then agent death is a minor incident. With those kinds of numbers, your program spends far more time processing live agents than it does dealing with dead agents, so worrying about the performance of removing a dead agent may not be worthwhile at all. If making agent removal more efficient would end up making normal agent iteration and processing less efficient (yet agent removal is relatively rare), then that would probably be a poor trade-off.
On the other hand, if large numbers of agents are born and die every simulation step, then you might want to make sure those events can be handled efficiently. So it really depends on the kind of numbers you expect to be dealing with.
My general advice would be to proceed with using std::vector (as long as it fits the rest of your design) unless you really expect a significant number of agent deaths per step compared to the number of agents in total.
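Since the question says the order of the agents doesn't matter, one common way to keep removal cheap with std::vector is swap-and-pop: overwrite the dead agent with the last element and shrink the vector by one. A sketch under those assumptions (the health < 0 test and the per-agent work are placeholders, and the function names are mine):

#include <cstddef>
#include <utility>
#include <vector>

// Agent as defined in the question.

// O(1) removal per dead agent; element order is not preserved.
void removeDeadAgents(std::vector<Agent>& agents) {
    for (std::size_t i = 0; i < agents.size(); ) {
        if (agents[i].health < 0.0f) {
            agents[i] = std::move(agents.back()); // overwrite with the last agent
            agents.pop_back();
            // do not increment i: the swapped-in agent still needs checking
        } else {
            ++i;
        }
    }
}

// Independent per-agent updates parallelize naturally over contiguous storage.
void updateAll(std::vector<Agent>& agents) {
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(agents.size()); ++i)
        agents[i].health -= 1.0f; // stand-in for the real update
}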
std::list should work pretty well. It can be parallelized, because inserting or removing an element does not invalidate other iterators (except, of course, iterators pointing to the element being removed).
If you don't need backward traversal, slist (standardized as std::forward_list in C++11) is as good as list, and a little faster.
If you don't care about the order of elements, use std::set.
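If you do go with std::list, erasing dead agents during a traversal looks like this (a sketch; the health < 0 test is just an example condition):

#include <list>

// Agent as defined in the question.

// Erasing while iterating is safe: only the erased element's iterator is invalidated.
void removeDead(std::list<Agent>& agents) {
    for (auto it = agents.begin(); it != agents.end(); ) {
        if (it->health < 0.0f)
            it = agents.erase(it); // erase returns the next valid iterator
        else
            ++it;
    }
    // or in one call: agents.remove_if([](const Agent& a) { return a.health < 0.0f; });
}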
Use a quadtree, like in video games. Then searching on pos is fast and removal is fast too. (Plus you can parallelize across the child nodes.)
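A rough sketch of what such a quadtree could look like, storing indices into the main agent array; the AABB type, the node capacity, and all names here are my own illustrative choices rather than anything the answer specifies:

#include <array>
#include <cstddef>
#include <memory>
#include <vector>

// Axis-aligned square region: centre plus half-extent.
struct AABB {
    float cx, cy, halfSize;
    bool contains(float x, float y) const {
        return x >= cx - halfSize && x <= cx + halfSize &&
               y >= cy - halfSize && y <= cy + halfSize;
    }
};

// Each node stores agent indices; full nodes split into four children.
struct QuadTree {
    static constexpr std::size_t kCapacity = 8; // split threshold (arbitrary)
    AABB bounds;
    std::vector<std::size_t> items;
    std::array<std::unique_ptr<QuadTree>, 4> children;

    explicit QuadTree(AABB b) : bounds(b) {}

    bool insert(std::size_t index, float x, float y) {
        if (!bounds.contains(x, y)) return false;
        if (!children[0] && items.size() < kCapacity) {
            items.push_back(index);
            return true;
        }
        if (!children[0]) subdivide();
        for (auto& child : children)
            if (child->insert(index, x, y)) return true;
        items.push_back(index); // falls on a boundary: keep it in this node
        return true;
    }

    void subdivide() {
        float h = bounds.halfSize * 0.5f;
        children[0] = std::make_unique<QuadTree>(AABB{bounds.cx - h, bounds.cy - h, h});
        children[1] = std::make_unique<QuadTree>(AABB{bounds.cx + h, bounds.cy - h, h});
        children[2] = std::make_unique<QuadTree>(AABB{bounds.cx - h, bounds.cy + h, h});
        children[3] = std::make_unique<QuadTree>(AABB{bounds.cx + h, bounds.cy + h, h});
    }
};

Clearing and rebuilding the tree from the live agents each step is the simplest way to handle removal, and the four children can then be processed independently (e.g. with OpenMP tasks), which is presumably what the answer means by parallelizing across child nodes.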