C map/hash-table that's keyed by integers and maps to void pointers

I am rewriting a lightweight image server I originally wrote in Python using epoll in C (not C++). I want to write (or use an existing) very simple map or hash table that maps integer keys (file descriptors) to void pointers. What's a good way to go about doing this? I don't need to support arbitrary key types or even string keys. I have one idea:

#include <stdlib.h>
#include <string.h>

// Initialize map (one void * slot per bucket, zeroed).
size_t map_size = 50;
void **map = malloc(map_size * sizeof(void *));
memset(map, 0, map_size * sizeof(void *));   // zero all the bytes, not just map_size of them

// Set values for keys 3, 20, 67
int key_a = 3;
int key_b = 20;
int key_c = 67;
void *value_a = ...;
void *value_b = ...;
void *value_c = ...;

// NOTE: This does not take into account conflicting keys. I would probably solve
// that using an array or linked-list per bucket and comparing keys (sketched below).
map[key_a % map_size] = value_a;
map[key_b % map_size] = value_b;
map[key_c % map_size] = value_c;
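
For the collisions mentioned in the NOTE, I'm picturing separate chaining, roughly like the sketch below (the struct and function names are just placeholders I made up):

#include <stdlib.h>

struct bucket_node {
    int key;
    void *value;
    struct bucket_node *next;
};

struct int_map {
    struct bucket_node **buckets;   // array of nbuckets chain heads, all NULL initially
    size_t nbuckets;
};

void *int_map_get(const struct int_map *m, int key)
{
    struct bucket_node *n = m->buckets[(size_t)key % m->nbuckets];
    for (; n != NULL; n = n->next)
        if (n->key == key)
            return n->value;
    return NULL;                     // key not present
}

int int_map_set(struct int_map *m, int key, void *value)
{
    size_t i = (size_t)key % m->nbuckets;
    for (struct bucket_node *n = m->buckets[i]; n != NULL; n = n->next) {
        if (n->key == key) {         // existing key: overwrite in place
            n->value = value;
            return 0;
        }
    }
    struct bucket_node *n = malloc(sizeof *n);   // new key: push onto the chain
    if (n == NULL)
        return -1;
    n->key = key;
    n->value = value;
    n->next = m->buckets[i];
    m->buckets[i] = n;
    return 0;
}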

Is this sensible, or are there much better ways to accomplish this? Can someone point me in the right direction?


File descriptors are small integers on most systems, and often contiguous, since they are used as indices inside the kernel. Hence I propose simply creating an array covering 0..maxfd (growing it dynamically) and using the file descriptor itself as the index, with no hashing at all.

As a safeguard, you may want to protect against systems that use different strategies for allocating file descriptors, e.g. by aborting if a descriptor is larger than 2^20.
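
A minimal sketch of that approach, assuming a dynamically grown array of void * slots indexed by the fd itself (the fd_map names are made up for illustration):

#include <stdlib.h>
#include <string.h>

typedef struct {
    void **slots;   // slots[fd] holds the per-descriptor pointer, or NULL
    size_t cap;
} fd_map;

int fd_map_set(fd_map *m, int fd, void *value)
{
    if (fd < 0 || fd > (1 << 20))    // safeguard against unusual fd allocation schemes
        return -1;
    if ((size_t)fd >= m->cap) {
        size_t new_cap = m->cap ? m->cap : 64;
        while (new_cap <= (size_t)fd)
            new_cap *= 2;
        void **p = realloc(m->slots, new_cap * sizeof(void *));
        if (p == NULL)
            return -1;
        memset(p + m->cap, 0, (new_cap - m->cap) * sizeof(void *));  // zero only the new slots
        m->slots = p;
        m->cap = new_cap;
    }
    m->slots[fd] = value;
    return 0;
}

void *fd_map_get(const fd_map *m, int fd)
{
    return (fd >= 0 && (size_t)fd < m->cap) ? m->slots[fd] : NULL;
}

Start with fd_map m = {0}; lookups are then plain O(1) array indexing, with no hashing or collision handling at all.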


Use the public-domain implementation of a generic C hash table in Ruby's codebase -- st.c.


There's nothing wrong with using a simple modulus as a "hash algorithm" per se, but it only works well if you know the results will be evenly distributed. In your case, however, you can't technically count on that with file descriptors, since there's no particular guarantee as to what numbers you'll get back from the open/fopen calls.

There are very simple hash algorithms out there that are pretty fast and work well enough for general use cases. You could consider the FNV family, or even the dead-simple Pearson hash.
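
For instance, 32-bit FNV-1a over the bytes of an int key takes only a few lines (the function name below is just illustrative):

#include <stdint.h>
#include <string.h>

// Hash an int key into a bucket index with 32-bit FNV-1a.
size_t fnv1a_bucket(int key, size_t nbuckets)
{
    uint32_t h = 2166136261u;            // FNV-1a offset basis
    unsigned char bytes[sizeof key];
    memcpy(bytes, &key, sizeof key);     // hash the key's raw bytes
    for (size_t i = 0; i < sizeof bytes; i++) {
        h ^= bytes[i];
        h *= 16777619u;                  // FNV prime
    }
    return (size_t)(h % nbuckets);
}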

That said, I'm a bit curious as to why you want a hash table keyed off of file descriptors. That seems like an odd design detail, and makes me think you're overcomplicating something.


Others have raised good points about whether this is really what you want to do, but just to answer your immediate question, the glibc hashtable functions should be available on most systems. Note that you almost certainly want to use the _r variants (hcreate_r, hsearch_r, hdestroy_r), since the vanilla versions create and manipulate a single, global hashtable.
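
Very roughly, and with the caveats that these <search.h> functions key on strings (so the fd has to be formatted into one) and that the capacity is fixed when you call hcreate_r, usage looks something like this sketch:

#define _GNU_SOURCE
#include <search.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    struct hsearch_data tab;
    memset(&tab, 0, sizeof tab);                 // must be zeroed before hcreate_r
    if (hcreate_r(1024, &tab) == 0) {
        perror("hcreate_r");
        return 1;
    }

    int fd = 42;                                 // stand-in for a real descriptor
    char keybuf[16];
    snprintf(keybuf, sizeof keybuf, "%d", fd);

    // ENTER stores the key pointer itself, so it must outlive the table
    // (the strdup'd copy is deliberately leaked here for brevity).
    ENTRY item = { .key = strdup(keybuf), .data = "per-connection state" };
    ENTRY *found;
    if (hsearch_r(item, ENTER, &found, &tab) == 0) {
        perror("hsearch_r ENTER");
        return 1;
    }

    ENTRY query = { .key = keybuf };             // look the fd back up
    if (hsearch_r(query, FIND, &found, &tab) != 0)
        printf("fd %d -> %s\n", fd, (char *)found->data);

    hdestroy_r(&tab);
    return 0;
}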
