C threading in Linux?

Does anyone have a simple example of threading in C?

I want to build a small console app that will read a txt file line by line and then use threads to process the entire txt. How should I do this? Splitting the txt into X parts, where X = number of threads, is the first thing that comes to my mind; is there a better way?


Search for pthreads. I'm also a thread newbie. Here is a code snippet to sum from 1 to 1000000000 (also my first working pthread program).

/* Compile with: gcc -pthread sum.c -o sum */
#include <stdio.h>
#include <pthread.h>

/* arguments for one worker: sum the integers in [a, b] and store the
   result through rst; long long is used because the total (about 5*10^17)
   would overflow a 32-bit int */
struct arg {
    long long a, b;
    long long *rst;
};
typedef struct arg arg;

void *sum(void *);

int main(void)
{
    pthread_t sum1, sum2;
    long long s1, s2;

    /* the compound literals live until main() returns, so they are
       still valid while the threads use them */
    pthread_create(&sum1, NULL, sum, &(arg){1, 500000000, &s1});
    pthread_create(&sum2, NULL, sum, &(arg){500000001, 1000000000, &s2});

    /* wait for both workers before reading their results */
    pthread_join(sum1, NULL);
    pthread_join(sum2, NULL);
    printf("%lld\n", s1 + s2);
    return 0;
}

void *sum(void *ptr)
{
    long long i, temp = 0;
    arg *x = ptr;

    for (i = x->a; i <= x->b; ++i)
        temp += i;
    *(x->rst) = temp;
    return NULL;
}


The best option IMHO is to use POSIX threads. You can see more details HERE.

Also please check the link in James' answer.


Search for POSIX threads, also known as pthreads. Tutorial Here


If you want an easy way to do it, OpenMP is a powerful multithreading API supported by gcc (compile with -fopenmp).

  #pragma omp parallel for
  for(i=0; i<1000; i++){
    a[i] = b[i] + c[i];
  }

This performs a simple element-wise addition of two arrays and stores the result in "a"; on a quad-core machine it will spawn 4 threads to handle it (8 if hyperthreading is supported).
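
For reference, a more complete, compilable version of that snippet might look like this (the array size N and the gcc -fopenmp compile line are assumptions for the example, not part of the original answer):

/* Assumed build: gcc -fopenmp vecadd.c -o vecadd */
#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void)
{
    int a[N], b[N], c[N];
    int i;

    /* fill the input arrays with some test data */
    for (i = 0; i < N; i++) {
        b[i] = i;
        c[i] = 2 * i;
    }

    /* OpenMP splits the iterations of this loop across the available cores */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = b[i] + c[i];

    printf("a[%d] = %d (max threads: %d)\n", N - 1, a[N - 1], omp_get_max_threads());
    return 0;
}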

Easy multicore programming on Linux. :)

A guide by a Finn: http://bisqwit.iki.fi/story/howto/openmp/


The first thing is to ask yourself whether you really need multi-threading here. Do you need shared state between the threads, e.g. does the parsed information from all the URLs end up in the same data structure? If not, processes (fork) might be sufficient. Or you might not even go that far and just use event-based programming (glib, libev).

GLib might be worth your while even if you decide to use threads after all, as it has a decent thread abstraction, including thread pools. This would make partitioning your file very easy: you just create X thread pools and then add the download/parse tasks to one of them (line number % number of pools).
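
For illustration, a rough sketch of that thread-pool idea might look like the code below. I use a single GThreadPool with a few worker threads rather than X separate pools, and the file name "urls.txt", the process_line() worker and the pool size of 4 are just placeholders, not something from the original answer:

/* Assumed build: gcc pool.c $(pkg-config --cflags --libs glib-2.0) -o pool */
#include <glib.h>
#include <stdio.h>

/* placeholder worker: replace the body with the real download/parse logic */
static void process_line(gpointer data, gpointer user_data)
{
    char *line = data;
    g_print("processing: %s", line);
    g_free(line);
}

int main(void)
{
    GError *error = NULL;
    GThreadPool *pool;
    FILE *fp;
    char buf[1024];

    pool = g_thread_pool_new(process_line, NULL, 4 /* max threads */, FALSE, &error);
    fp = fopen("urls.txt", "r");
    if (pool == NULL || fp == NULL)
        return 1;

    /* one queued task per line; the pool distributes them to its threads */
    while (fgets(buf, sizeof buf, fp) != NULL)
        g_thread_pool_push(pool, g_strdup(buf), NULL);
    fclose(fp);

    /* wait for all queued lines to be processed, then free the pool */
    g_thread_pool_free(pool, FALSE, TRUE);
    return 0;
}

The final g_thread_pool_free(pool, FALSE, TRUE) blocks until the queued tasks have finished, which replaces the manual joins you would otherwise need.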

If it's just about speeding up downloads, maybe your HTTP library already has related functionality. For curl, there is the curl_multi family of calls, with an interesting example here.


Splitting the txt into X where X = number of threads is the first thing that comes to my mind, is there a better way?

It depends on your application.

  • Threads may help if interpreting the data is the bottleneck; even then, the performance gain will be limited by the file I/O speed
  • Threads won't help if reading the file is the bottleneck; disk I/O is limited by the hardware and will only degrade if more threads request data at once

If interpreting the information takes long, you can use something like the producer-consumer pattern and test for yourself how many threads you need (try a low number first and see how many give you the best performance). Some examples can be found here and here.

As the other answers point out, you can use pthreads to implement the threading.
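
For reference, here is a minimal sketch of that producer-consumer idea with pthreads and a small bounded queue. The file name "input.txt", NUM_WORKERS and QUEUE_SIZE are just placeholders to make the sketch self-contained:

/* Assumed build: gcc -pthread prodcons.c -o prodcons */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

static char *queue[QUEUE_SIZE];          /* ring buffer of pending lines */
static int head, tail, count;
static int done;                         /* set when the producer has read the whole file */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void enqueue(char *line)          /* producer side: blocks while the queue is full */
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = line;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static char *dequeue(void)               /* consumer side: returns NULL when all work is done */
{
    char *line = NULL;
    pthread_mutex_lock(&lock);
    while (count == 0 && !done)
        pthread_cond_wait(&not_empty, &lock);
    if (count > 0) {
        line = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_cond_signal(&not_full);
    }
    pthread_mutex_unlock(&lock);
    return line;
}

static void *worker(void *arg)
{
    char *line;
    while ((line = dequeue()) != NULL) {
        /* placeholder: the real per-line processing goes here */
        printf("processing: %s", line);
        free(line);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    FILE *fp = fopen("input.txt", "r");
    char buf[1024];
    int i;

    if (fp == NULL)
        return 1;

    for (i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    /* producer: read the file line by line and hand the lines to the workers */
    while (fgets(buf, sizeof buf, fp) != NULL)
        enqueue(strdup(buf));
    fclose(fp);

    /* tell the workers that no more lines are coming */
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&not_empty);
    pthread_mutex_unlock(&lock);

    for (i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}

As suggested above, start with a small NUM_WORKERS and measure; since a single producer reads the file, disk I/O will cap the overall gain.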
