I want to create functionality similar to PIL's Image.blend, but using a different blending algorithm. To do this, would I need to: (1) directly modify the PIL modules and compile my own custom PIL, or (2) write a Python C module which imports and extends PIL?
I have unsuccessfully tried:
#include "_imaging.c"
I also tried just pulling out the parts I need from the PIL source and putting them in my own file. The farther I got, the more things I had to pull out, and it seems that is not the ideal solution.
UPDATE: edited to add the blending algorithm implemented in Python (this emulates the overlay blending mode in Photoshop):
def overlay(upx, lpx):
    return (2 * upx * lpx / 255) if lpx < 128 else (255 - 2 * (255 - upx) * (255 - lpx) / 255)

def blend_images(upper=None, lower=None):
    upixels = upper.load()
    lpixels = lower.load()
    width, height = upper.size
    pixeldata = [0] * len(upixels[0, 0])
    for x in range(width):
        for y in range(height):
            # the next for loop is to deal with images of any number of bands
            for i in range(len(upixels[x, y])):
                pixeldata[i] = overlay(upixels[x, y][i], lpixels[x, y][i])
            upixels[x, y] = tuple(pixeldata)
    return upper
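As a quick sanity check on the per-channel formula (a standalone sketch; it uses Python 3's floor division // to match the integer / of the Python 2 code above), the overlay endpoints behave the way Photoshop's overlay mode should: a black lower layer forces black, a white lower layer forces white, and mid-grey roughly preserves the upper value:

```python
def overlay(upx, lpx):
    # same formula as the code above, with explicit floor division
    return (2 * upx * lpx // 255) if lpx < 128 else (255 - 2 * (255 - upx) * (255 - lpx) // 255)

print(overlay(200, 0))    # black lower layer  -> 0
print(overlay(200, 255))  # white lower layer  -> 255
print(overlay(128, 128))  # mid-grey lower layer -> 129 (close to the upper value)
```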
I have also unsuccessfully tried implementing this using SciPy's weave.inline:
def blend_images(upper=None, lower=None):
    upixels = numpy.array(upper)
    lpixels = numpy.array(lower)
    width, height = upper.size
    nbands = len(upixels[0, 0])
    code = """
    #line 120 "laplace.py" // (this is only useful for debugging)
    int upx, lpx;
    for (int i = 0; i < width-1; ++i) {
        for (int j = 0; j < height-1; ++j) {
            for (int k = 0; k < nbands-1; ++k) {
                upx = upixels[i,j][k];
                lpx = lpixels[i,j][k];
                upixels[i,j][k] = ((lpx < 128) ? (2 * upx * lpx / 255) : (255 - 2 * (255 - upx) * (255 - lpx) / 255));
            }
        }
    }
    return_val = upixels;
    """
    # compiler keyword only needed on windows with MSVC installed
    upixels = weave.inline(code,
                           ['upixels', 'lpixels', 'width', 'height', 'nbands'],
                           type_converters=converters.blitz,
                           compiler='gcc')
    return Image.fromarray(upixels)
I'm doing something wrong with the upixels and lpixels arrays, but I'm not sure how to fix them. I'm a bit confused about the type of upixels[i,j][k], and not sure what I could assign it to.
Here's my implementation in NumPy. I have no unit tests, so I don't know whether it contains bugs; I assume I'll hear from you if it fails. An explanation of what is going on is in the comments. It processes a 200x400 RGBA image in 0.07 seconds.
import Image, numpy

def blend_images(upper=None, lower=None):
    # convert to arrays
    upx = numpy.asarray(upper).astype('uint16')
    lpx = numpy.asarray(lower).astype('uint16')
    # do some error-checking
    assert upper.mode == lower.mode
    assert upx.shape == lpx.shape
    # calculate the results of the two conditions
    cond1 = 2 * upx * lpx / 255
    cond2 = 255 - 2 * (255 - upx) * (255 - lpx) / 255
    # make a new array that is defined by condition 2
    arr = cond2
    # this is a boolean array that defines where in the array lpx < 128
    mask = lpx < 128
    # populate the parts of the new array that meet the criteria for condition 1
    arr[mask] = cond1[mask]
    # prevent overflow (may not be necessary)
    arr.clip(0, 255, arr)
    # convert back to image
    return Image.fromarray(arr.astype('uint8'), upper.mode)
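The masked assignment is the core of the vectorization above: compute both branches for every pixel, then use a boolean mask to pick per pixel. A minimal standalone sketch of that pattern on a tiny 1x3 array (note the uint16 dtype before multiplying, which prevents uint8 overflow in 2 * upx * lpx):

```python
import numpy as np

upx = np.array([[0, 128, 255]], dtype=np.uint16)
lpx = np.array([[0, 127, 255]], dtype=np.uint16)

# evaluate both branches everywhere, then select per pixel
cond1 = 2 * upx * lpx // 255
cond2 = 255 - 2 * (255 - upx) * (255 - lpx) // 255
arr = cond2.copy()                   # start from the "light" branch
mask = lpx < 128                     # True where the lower layer is dark
arr[mask] = cond1[mask]              # overwrite those pixels with the "dark" branch

print(arr)  # [[  0 127 255]]
```

This evaluates both formulas over the full array, which is still far cheaper than a per-pixel Python loop because each operation runs in compiled NumPy code.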