What I want to do is take a screenshot of a number with pyautogui and then convert that number to a string with pytesseract. The code:

import pyautogui
import time
import PIL
from PIL import Image
import pytesseract

pytesseract.pytesseract.tesseract_cmd = 'C://Program Files (x86)//Tesseract-OCR//tesseract'

# Create image
time.sleep(5)
image = pyautogui.screenshot('projects/output.png', region=(1608, 314, 57, 41))

# Resize image
basewidth = 2000
img = Image.open('projects/output.png')
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth, hsize), PIL.Image.LANCZOS)  # ANTIALIAS was removed in Pillow 10
img.save('projects/output.png')

col = Image.open('projects/output.png')
gray = col.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save('projects/output.png')

# Image to string
screen = Image.open('projects/output.png')

print(pytesseract.image_to_string(screen, config='-c tessedit_char_whitelist=0123456789'))

It now seems that pytesseract does not accept the screenshot that pyautogui creates. The code runs without errors, but it prints an empty string. However, if I create the image in Paint and save it as "output.png" in the same folder, just like the screenshot, it does work.

(Screenshots of the resized and thresholded image output were attached here.)

Does anyone see what I'm missing?


2 Answers


Modify the paths and try the following:

import os
import numpy as np
from PIL import Image, ImageGrab
import pytesseract
import cv2


src_path = "C:\\Users\\USERNAME\\Documents\\OCR\\"

def get_region(box):
    #Grabs the region of the box coordinates
    im = ImageGrab.grab(box)
    #Change size of image to 200% of the original size
    a, b, c, d = box
    doubleX = (c - a) * 2
    doubleY = (d - b) * 2
    im.resize((doubleX, doubleY)).save(os.getcwd() + "\\test.png", 'PNG')

def get_string(img_path):
    # Read image with opencv
    img = cv2.imread(img_path)
    # Convert to gray
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # Write image after removed noise
    cv2.imwrite(src_path + "removed_noise.png", img)
    #  Apply threshold to get image with only black and white
    #img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    # Write the image after apply opencv to do some ...
    cv2.imwrite(src_path + "thres.png", img)
    # Recognize text with tesseract for python

    result = pytesseract.image_to_string(Image.open(src_path + "thres.png"))

    return result

def main():
    #Grab the region of the screenshot (box area)
    region = (1354,630,1433,648)
    get_region(region)

    #Output results
    print ("OCR Output: ")
    print(get_string(src_path + "test.png"))

if __name__ == "__main__":
    main()
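The resize-then-threshold preprocessing both answers rely on can also be done as one small PIL helper. This is a minimal sketch; the function name `preprocess` and the defaults `scale=4` and `thresh=128` are illustrative assumptions, not part of the answer above:

```python
from PIL import Image

def preprocess(img, scale=4, thresh=128):
    """Upscale, grayscale, and binarize an image before OCR.

    `scale` and `thresh` are illustrative defaults, not values
    taken from the answer.
    """
    w, h = img.size
    # Upscale so small digits have enough pixels for tesseract
    img = img.resize((w * scale, h * scale), Image.LANCZOS)
    # Grayscale, then hard-threshold to pure black and white
    gray = img.convert("L")
    return gray.point(lambda p: 255 if p >= thresh else 0, "1")
```

Usage would be e.g. `bw = preprocess(Image.open("test.png"))` before handing `bw` to pytesseract.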
answered 2017-06-23T04:48:29.893

Convert it to a numpy array; pytesseract accepts those.

import numpy as np
import pyautogui
import pytesseract

img = np.array(pyautogui.screenshot())
print(pytesseract.image_to_string(img, config='-c tessedit_char_whitelist=0123456789'))

Alternatively, I would recommend using "mss" for taking screenshots, since it is much faster.

import mss
import numpy as np
import pytesseract

with mss.mss() as sct:
    img = np.array(sct.grab(sct.monitors[1]))
print(pytesseract.image_to_string(img, config='-c tessedit_char_whitelist=0123456789'))
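Note that mss returns pixel data in BGRA channel order. If channel order matters for a later step, the alpha channel can be dropped and the channels reversed with plain NumPy slicing; the helper name `bgra_to_rgb` here is an illustrative assumption, not an mss API:

```python
import numpy as np

def bgra_to_rgb(frame):
    """Convert a BGRA frame (e.g. np.array(sct.grab(...))) to RGB.

    Drops the alpha channel, then reverses B,G,R -> R,G,B.
    """
    return frame[:, :, :3][:, :, ::-1]
```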
answered 2020-03-25T03:19:09.083