
I'm trying to take a retina screenshot programmatically, and I've tried every approach I could find online, but I can't get the screenshot to come out at retina resolution.

I'm aware of the following private API:

UIGetScreenImage();

which can't be used because Apple will reject your app. However, that method returns exactly what I need (a 640x960 screenshot of the screen).

I've tried this approach both on my iPhone 4 and on the iPhone 4 simulator with retina hardware, but the resulting image is always 320x480.

-(UIImage *)captureView
{
    AppDelegate *appdelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];

    // Use the scale-aware context on iOS 4+ so the screenshot matches the screen scale
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        UIGraphicsBeginImageContextWithOptions(appdelegate.window.bounds.size, NO, 0.0);
    else
        UIGraphicsBeginImageContext(appdelegate.window.bounds.size);

    [appdelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSLog(@"SIZE: %@", NSStringFromCGSize(image.size));
    NSLog(@"scale: %f", [UIScreen mainScreen].scale);

    return image;
}

I've also tried the approach recommended by Apple:

- (UIImage*)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);

            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];

            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    NSLog(@"Size: %@", NSStringFromCGSize(image.size));

    return image;
}

But it too returns a non-retina image: 2012-12-23 19:57:45.205 PostCard[3351:707] size: {320, 480}

Am I missing something obvious? How can methods that are supposed to take retina screenshots return non-retina screenshots? Thanks in advance!


1 Answer


I can't see anything wrong with your code. Besides image.size, have you tried logging image.scale? Is it 1 or 2? If it is 2, it actually is a retina image.

UIImage.scale represents the scale of the image. So an image with a UIImage.size of 320×480 and a UIImage.scale of 2 has an actual pixel size of 640×960. From Apple's documentation:

If you multiply the logical size of the image (stored in the size property) by the value in this property, you get the dimensions of the image in pixels.
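
In code, that multiplication is simply the following (just a minimal sketch, assuming image is the UIImage returned by one of your screenshot methods):

// Hypothetical check: pixel dimensions = logical size * scale
UIImage *image = [self screenshot];
CGSize pixelSize = CGSizeMake(image.size.width * image.scale,
                              image.size.height * image.scale);
NSLog(@"logical: %@  scale: %.0f  pixels: %@",
      NSStringFromCGSize(image.size), image.scale, NSStringFromCGSize(pixelSize));
// On an iPhone 4 this should print: logical: {320, 480}  scale: 2  pixels: {640, 960}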

This is the same idea as when you load an image into a UIImage using the @2x modifier. For example:

a.png (100×80)      => size=100×80 scale=1
b@2x.png (200×160)  => size=100×80 scale=2
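
If you want to be completely sure about what was rendered, you can also inspect the backing CGImage, whose width and height are always raw pixels regardless of the UIImage scale (again only a sketch, assuming image is your screenshot):

// The CGImage dimensions are in pixels, independent of UIImage.scale
size_t pixelWidth  = CGImageGetWidth(image.CGImage);
size_t pixelHeight = CGImageGetHeight(image.CGImage);
NSLog(@"CGImage: %zu x %zu pixels", pixelWidth, pixelHeight);
// A retina screenshot from an iPhone 4 should report 640 x 960 here,
// even though image.size stays {320, 480}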