128-bit type error

https://www.devze.com — 2022-12-10 19:08 (source: web)
Thanks to pbos's help and his program (published here, for XORing a big file), I performed some tests, and I see that I have another problem: having changed the mask to a 128-bit one, there is no built-in type as big as needed.

I think one solution could be to include a library that provides larger integer types... but rather than that, I would prefer to treat each 128-bit value as a string. Is this possible without losing performance?

Here is the current program (which fails with "integer constant is too large for its type"):

#include <stdio.h>
#include <stdlib.h>

#define BLOCKSIZE 128
#define MASK 0xA37c54f173f02889a64be02f2bc44112 /* a 128 bits constant */

void
usage(const char *cmd)
{
    fprintf(stderr, "Usage: %s <in-file> <out-file>\n", cmd);
    exit (EXIT_FAILURE);
}

int
main(int argc, char *argv[])
{
  if (argc < 3) usage(argv[0]);

  FILE *in = fopen(argv[1], "rb");

  if (in == NULL)
  {
    printf("Cannot open: %s\n", argv[1]);
    return EXIT_FAILURE;
  }

  FILE *out = fopen(argv[2], "wb");

  if (out == NULL)
  {
    fclose(in);
    printf("Unable to open '%s' for writing.\n", argv[2]);
    return EXIT_FAILURE;
  }

  char buffer[BLOCKSIZE];
  int count;

  while ((count = fread(buffer, 1, BLOCKSIZE, in)) > 0)
  {
    int i;
    for (i = 0; i < count; i++)
    {
      ((unsigned long *)buffer)[i] ^= MASK; /* this line is bugged */
    }

    if (fwrite(buffer, 1, count, out) != count)
    {
      fclose(in);
      fclose(out);
      printf("Cannot write, disk full?\n");
      return EXIT_FAILURE;
    }
  }

  fclose(in);
  fclose(out);

  return EXIT_SUCCESS;
}

Thanks for any suggestions.

Doug


Mask even "chunks" with the first half of your constant and odd "chunks" with the other half.

/* ... */
unsigned long long xormask[2] = {0xA37c54f173f02889ULL, 0xa64be02f2bc44112ULL};
/* ... */
for (i = 0; i < end; ++i)
{
    ((unsigned long long *)buffer)[i] ^= xormask[i & 1];
}
/* ... */


Walk through your key and plaintext byte-by-byte and perform the XOR on each byte separately.

Change your key to an array of bytes, and create a pointer into that array for convenience when using the key:

unsigned char const key[] = {
    0xA3, 0x7c, 0x54, 0xf1,
    0x73, 0xf0, 0x28, 0x89,
    0xa6, 0x4b, 0xe0, 0x2f,
    0x2b, 0xc4, 0x41, 0x12
};

unsigned char const* pKeyByte = key;

Then change the line where you encrypt from

((unsigned long *)buffer)[i] ^= MASK; /* this line is bugged */

to:

buffer[i] ^= *pKeyByte++;
if (pKeyByte == (key + sizeof(key))) {
    /* wrap to start of key */
    pKeyByte = key;
}

Now you can change BLOCKSIZE to whatever you'd like your I/O size to be, regardless of the key length. Note that as you're using it, BLOCKSIZE defines the number of bytes to be read in from the file in each loop - it's not the number of bits in the key.

Note that the caveats about XOR encryption I posted in your last question still apply.


Not all (many?) platforms will support 128-bit integers. You could factor out the code that deals directly with the 128-bit numbers and write two versions of it, one for platforms that support 128-bit numbers and another for those that don't. A configuration script could check for 128-bit support (check "sizeof(long long)" and for "uint128_t", perhaps?) and pick between the two implementations.


You are going to need to break your mask into chunks as well.

unsigned int mask[] = { 0xA37c54f1, 0x73f02889, 0xa64be02f, 0x2bc44112 };
unsigned int mask_len = sizeof(mask) / sizeof(*mask);

You will then need to treat the buffer you read in as a series of unsigned int rather than chars, and XOR each chunk against the corresponding chunk of the mask:

unsigned int *uint_buffer = (unsigned int *)buffer;
for (i = 0; i < count / sizeof(int); i++)
{
    uint_buffer[i] ^= mask[i % mask_len];
}

Finally, depending on the specifics of your task, you may need to deal with endian issues with the data you read in from the file.


Based on Burr's answer, I would prefer this:

int i;
int sizeOfKey = sizeof(key); // key is an array of chars
for (i = 0; i < count; i++)
{
    buffer[i] ^= key[i % sizeOfKey];
}

Note that restarting `i` at 0 for each block read presumes the block size is a multiple of the key length (16 bytes), so the key phase stays aligned across blocks; with BLOCKSIZE = 128 that holds for every block except possibly the last one, which ends the file anyway.


The sizes of the integer types are a core language characteristic. They cannot be changed by including any library. If you insist on using the built-in integral types, the only way to overcome this limitation is to switch to another implementation that natively supports larger integer types at the core language level.

Otherwise, you'll have to either explicitly split your "long" constant into two or more "shorter" constants, or use some library that will do exactly the same "under the hood".
