Let's say I have an image (e.g. 1024 x 768 px) which is displayed in a UIImageView (e.g. 300 x 300 px). Now, I'd like to convert a point from the image, e.g. the position of a person's nose (x: 500, y:600), to the corresponding point on the UIImageView with its contentMode taken into account.
If the contentMode were fixed at UIViewContentModeScaleToFill, the conversion would be easy. But if it's UIViewContentModeScaleAspectFit, things get more complex.
Is there an elegant way to achieve that? I don't really want to calculate that for every single contentMode (more than 10, I think).
Thanks!
Today, I've had some spare time to put together my solution to this problem and publish it on GitHub: UIImageView-GeometryConversion
It's a category on UIImageView that provides methods for converting points and rects from the image to view coordinates and respects all of the 13 different content modes.
Hope you like it!
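For the question's example, using the category might look roughly like this; the selector name is assumed from the category's description of point/rect conversion, so check the repo for the exact API:

    // Map the nose position from image coordinates (1024 x 768 photo)
    // into the image view's coordinate space, honoring its contentMode.
    CGPoint nose = CGPointMake(500.0f, 600.0f);
    CGPoint noseInView = [imageView convertPointFromImage:nose]; // assumed selector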
Here's my quick'n'dirty solution:
- (CGPoint)convertPoint:(CGPoint)sourcePoint fromContentSize:(CGSize)sourceSize {
    CGPoint targetPoint = sourcePoint;
    CGSize targetSize = self.bounds.size;

    CGFloat ratioX = targetSize.width / sourceSize.width;
    CGFloat ratioY = targetSize.height / sourceSize.height;

    if (self.contentMode == UIViewContentModeScaleToFill) {
        // Stretch independently along each axis.
        targetPoint.x *= ratioX;
        targetPoint.y *= ratioY;
    }
    else if (self.contentMode == UIViewContentModeScaleAspectFit) {
        // Uniform scale so the whole image fits, then center it in the view.
        CGFloat scale = MIN(ratioX, ratioY);
        targetPoint.x *= scale;
        targetPoint.y *= scale;
        targetPoint.x += (targetSize.width - sourceSize.width * scale) / 2.0f;
        targetPoint.y += (targetSize.height - sourceSize.height * scale) / 2.0f;
    }
    else if (self.contentMode == UIViewContentModeScaleAspectFill) {
        // Uniform scale so the image fills the view, then center it (overflow is clipped).
        CGFloat scale = MAX(ratioX, ratioY);
        targetPoint.x *= scale;
        targetPoint.y *= scale;
        targetPoint.x += (targetSize.width - sourceSize.width * scale) / 2.0f;
        targetPoint.y += (targetSize.height - sourceSize.height * scale) / 2.0f;
    }

    return targetPoint;
}
- (CGRect)convertRect:(CGRect)sourceRect fromContentSize:(CGSize)sourceSize {
    CGRect targetRect = sourceRect;
    CGSize targetSize = self.bounds.size;

    CGFloat ratioX = targetSize.width / sourceSize.width;
    CGFloat ratioY = targetSize.height / sourceSize.height;

    if (self.contentMode == UIViewContentModeScaleToFill) {
        // Stretch independently along each axis.
        targetRect.origin.x *= ratioX;
        targetRect.origin.y *= ratioY;
        targetRect.size.width *= ratioX;
        targetRect.size.height *= ratioY;
    }
    else if (self.contentMode == UIViewContentModeScaleAspectFit) {
        // Uniform scale so the whole image fits, then center it in the view.
        CGFloat scale = MIN(ratioX, ratioY);
        targetRect.origin.x *= scale;
        targetRect.origin.y *= scale;
        targetRect.origin.x += (targetSize.width - sourceSize.width * scale) / 2.0f;
        targetRect.origin.y += (targetSize.height - sourceSize.height * scale) / 2.0f;
        targetRect.size.width *= scale;
        targetRect.size.height *= scale;
    }
    else if (self.contentMode == UIViewContentModeScaleAspectFill) {
        // Uniform scale so the image fills the view, then center it (overflow is clipped).
        CGFloat scale = MAX(ratioX, ratioY);
        targetRect.origin.x *= scale;
        targetRect.origin.y *= scale;
        targetRect.origin.x += (targetSize.width - sourceSize.width * scale) / 2.0f;
        targetRect.origin.y += (targetSize.height - sourceSize.height * scale) / 2.0f;
        targetRect.size.width *= scale;
        targetRect.size.height *= scale;
    }

    return targetRect;
}
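Applied to the numbers from the question (a 1024 x 768 image in a 300 x 300 image view), calling it could look like this; the variable names are just for illustration:

    // imageView is assumed to be the 300 x 300 UIImageView showing the 1024 x 768 photo.
    CGPoint nose = CGPointMake(500.0f, 600.0f);
    CGPoint noseInView = [imageView convertPoint:nose fromContentSize:imageView.image.size];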
Once it's refactored and optimized, I'll publish it on GitHub and post the link here. Maybe this snippet is helpful to someone in the meantime.
Nope, there is no built-in, public, or elegant way to do it.
You need to reverse-engineer the behavior of the content modes you need yourself.
Why can't you just rescale the coordinates:
X(UIImageView) = (X(Image) / 1024) * 300
and
Y(UIImageView) = (Y(Image) / 768) * 300
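Note that this plain proportional rescale matches UIViewContentModeScaleToFill only, since it ignores uniform aspect scaling and centering offsets. A minimal sketch with the question's numbers (image and view sizes taken from the question):

    // Only valid for UIViewContentModeScaleToFill.
    CGSize imageSize = CGSizeMake(1024.0f, 768.0f);
    CGSize viewSize  = CGSizeMake(300.0f, 300.0f);
    CGPoint nose = CGPointMake(500.0f, 600.0f);
    CGPoint noseInView = CGPointMake(nose.x / imageSize.width  * viewSize.width,   // ~146.5
                                     nose.y / imageSize.height * viewSize.height); // ~234.4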