How to share a linked list between two processes?

I have two processes and I want to share a linked list between them. One of the processes is only going to read the list, while the other process is going to modify it (add/delete entries). Can you tell me how to achieve this?


Let me add more details: the language is C and the platform is Linux. It seems that shared memory is one approach, but I do not know how to implement it.

If anyone can tell me a way of achieving this, it will be of great help to me.

I have one idea, as mentioned below: can I create a shared memory segment the size of one node and then simply deal with malloc? What I mean is: I will create the shared memory with shmget(key, SHMSZ, IPC_CREAT|0666), where SHMSZ is sizeof(struct node), so only the head of the list is shared between the two processes. The first entry in the list will have all values set to 0 except the link member, which will point to the next entry in the list, and that entry is created with malloc. In my application one process is only going to read, while the other process is going to add/delete entries in the list.

I got one reply telling me that I cannot use malloc, but I do not know why. Could you please explain why I cannot use malloc?

Below is the code I have been trying for this purpose, but I am getting a segmentation fault.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct node
{
    int val;
    struct node* next;
};

void append(struct node *q, int val);

int main(void)
{
    key_t key = 5678;

    int shmid;

    struct node *head;

    /* One segment, big enough for the head node only. */
    if ((shmid = shmget(key, sizeof(struct node), IPC_CREAT | 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* shmat() returns (void *) -1 on failure, not NULL. */
    head = (struct node *) shmat(shmid, (void *) 0, 0);
    if (head == (void *) -1) {
        perror("shmat");
        exit(1);
    }

    head->val = 0;
    head->next = NULL;

    append(head, 2);
    append(head, 5);
    append(head, 6);

    exit(0);
}

void append(struct node *q, int val)
{
    struct node *last, *fresh;

    /* Allocate and fill the new node first.  Note that it lives on
       this process's private heap, not in the shared segment. */
    fresh = malloc(sizeof(struct node));
    if (fresh == NULL) {
        perror("malloc");
        exit(1);
    }
    fresh->val = val;
    fresh->next = NULL;

    /* Walk to the last node and link the new one in.  The original
       loop only advanced a local pointer and never assigned to any
       node's next field, so the new node was never attached. */
    last = q;
    while (last->next != NULL)
        last = last->next;
    last->next = fresh;
}


I don't know who Beej is, but he writes excellent guides, and he's written one on shared memory segments.


You've got a bunch of options:

  • use inter-process communication (sockets, named pipes, whatever) to send a duplicate of the list to each process that needs a copy. You'll have duplicate lists, which could mean duplicate work, and you'll have the problem of keeping the lists in sync - this may or may not matter for your problem.
  • use a list server, and inter-process communication to request or set list elements. You'll need to gracefully handle cases where a client never returns a list item (perhaps with a heartbeat and/or a reservation time).
  • use shared memory to directly share the list between processes. You'll need to be very careful about locking, and it won't work across different systems.
  • use fork(), and let the child process inherit the list (a minimal sketch of this follows the list). Like the first solution, you'll wind up with duplicate lists and (potentially) duplicate work or duplicate elements. Like the third solution, this doesn't work across systems.
  • use some combination of the above.
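
For the fork() option, here is a minimal sketch (the list contents and names are illustrative): the child receives a copy-on-write duplicate of the parent's heap, so the pointers it inherited remain valid in its own copy, but nothing either process does after fork() is visible to the other.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

struct node { int val; struct node *next; };

int main(void)
{
    /* Build a small list on the private heap before forking. */
    struct node *head = malloc(sizeof *head);
    struct node *second = malloc(sizeof *second);
    head->val = 2;
    head->next = second;
    second->val = 5;
    second->next = NULL;

    if (fork() == 0) {
        /* Child: walks its inherited copy of the list.  The pointers
           work here only because they were duplicated at fork(). */
        for (struct node *p = head; p != NULL; p = p->next)
            printf("child sees %d\n", p->val);
        _exit(0);
    }

    /* Parent: this change is invisible to the child (and vice versa). */
    head->val = 99;
    wait(NULL);
    return 0;
}

If a snapshot of the list as it existed at fork time is all the reader needs, this is the simplest of the four options.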


Simple answer: Don't.

The benefit of multiple processes is that you don't have shared memory. This gives you isolation from the other process. Having a "data-only" barrier in place gives you great benefits as far as design and robustness go.

Instead, have the producing process send the new data (as data) to the consuming process. This can be done via STDIN/STDOUT, a socket, or even something as low-tech as a file; a minimal pipe-based sketch follows.
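
Here is a small sketch of that idea, assuming the producer forks the consumer and ships each value over a pipe. In a real program the two sides would be separate programs connected via STDIN/STDOUT or a socket, and the producer would walk its actual list instead of the placeholder array used here.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        exit(1);
    }

    if (fork() == 0) {
        /* Consumer: reads plain int values; no pointer ever crosses
           the process boundary. */
        int val;
        close(fds[1]);
        while (read(fds[0], &val, sizeof val) == (ssize_t) sizeof val)
            printf("received %d\n", val);
        _exit(0);
    }

    /* Producer: in place of walking a real list, send a few values. */
    int values[] = { 2, 5, 6 };
    close(fds[0]);
    for (size_t i = 0; i < sizeof values / sizeof values[0]; i++)
        write(fds[1], &values[i], sizeof values[i]);
    close(fds[1]);    /* EOF tells the consumer the list is complete */
    wait(NULL);
    return 0;
}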


The reason you cannot use malloc for the individual nodes is that malloc returns a pointer into the heap of the calling process. That pointer means nothing to another process, which has its own heap. The same address might be valid in both processes, but the "physical" memory it refers to will be different.

If you are using shmget, you would need to use it for each node (or create a larger chunk and then use offsets into it, as was already suggested; a sketch of the offset approach follows below). If individual shmget calls were used for each node, then the key could serve as the next "pointer" in the list. That would be a "simple" way of implementing it, but it might not be a good idea: there is a system-wide limit on shared memory objects, I believe defined by SHMMNI, which is 4096 on a system I just checked.
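
Here is a hedged sketch of the "larger chunk plus offsets" variant: a single shmget segment holds a fixed pool of nodes, and links are stored as indices into that pool rather than raw pointers, so they stay meaningful no matter what address shmat() returns in each process. The pool size, key, and bump allocator are simplifying assumptions; a real implementation would also need a free list and locking.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define POOL_SIZE 128   /* arbitrary fixed capacity for this sketch */
#define NIL (-1)        /* index equivalent of a NULL pointer */

struct shnode {
    int val;
    int next;           /* index into pool->nodes, not a raw pointer */
};

struct pool {
    int free_top;       /* next unused slot (no reuse in this sketch) */
    int head;           /* index of the first node, or NIL */
    struct shnode nodes[POOL_SIZE];
};

int main(void)
{
    key_t key = 5678;   /* same arbitrary key as in the question */
    int shmid;
    struct pool *p;

    if ((shmid = shmget(key, sizeof(struct pool), IPC_CREAT | 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    p = (struct pool *) shmat(shmid, (void *) 0, 0);
    if (p == (void *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Initialization; a real program must guard against re-initializing
       a segment another process is already using. */
    p->free_top = 0;
    p->head = NIL;

    /* "Allocate" a node from the pool and push it onto the list. */
    if (p->free_top < POOL_SIZE) {
        int idx = p->free_top++;
        p->nodes[idx].val = 2;
        p->nodes[idx].next = p->head;
        p->head = idx;
    }

    /* Any process that attaches this segment can walk the list,
       because indices resolve correctly at any attach address. */
    for (int i = p->head; i != NIL; i = p->nodes[i].next)
        printf("%d\n", p->nodes[i].val);

    shmdt(p);
    return 0;
}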

Another way of implementing it would be to make one process the owner of the list (the process that creates and deletes nodes). Then use a single shared memory object to communicate between the two processes and hand out individual nodes, and use shared semaphores (semget, semop, etc.) to synchronize, as in the sketch below.
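
A minimal sketch of the semaphore side, assuming a single System V semaphore used as a mutex around accesses to the shared segment. The key is an arbitrary value for illustration, and the union semun definition is required on Linux because the C library does not provide it.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Linux requires the caller to define union semun. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

/* Acquire the mutex: wait until the semaphore is 1, then take it to 0. */
static void list_lock(int semid)
{
    struct sembuf op = { 0, -1, 0 };
    if (semop(semid, &op, 1) < 0) {
        perror("semop lock");
        exit(1);
    }
}

/* Release the mutex: raise the semaphore back to 1. */
static void list_unlock(int semid)
{
    struct sembuf op = { 0, 1, 0 };
    if (semop(semid, &op, 1) < 0) {
        perror("semop unlock");
        exit(1);
    }
}

int main(void)
{
    key_t key = 5679;   /* arbitrary key, distinct from the segment's */
    int semid;
    union semun arg;

    if ((semid = semget(key, 1, IPC_CREAT | 0666)) < 0) {
        perror("semget");
        exit(1);
    }

    /* The creating process initializes the semaphore to 1 (unlocked). */
    arg.val = 1;
    if (semctl(semid, 0, SETVAL, arg) < 0) {
        perror("semctl");
        exit(1);
    }

    list_lock(semid);
    /* ... read or modify the shared list here ... */
    list_unlock(semid);
    return 0;
}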

