
I am trying to run an automated process every day in a Jupyter Notebook (on deepnote.com), but after the first iteration of a while loop finishes and the next one starts (there is a for loop nested inside the while loop), the virtual machine crashes and throws the following message:

KernelInterrupted: Execution interrupted by the Jupyter kernel

This is the code:

.
.
.

while y < 5:
    print(f'\u001b[45m Try No. {y} out of 5 \033[0m')

    #tell the driver to wait up to 10 seconds when looking up elements (implicit wait).

    driver.implicitly_wait(10)

    #values for the example.
    #Declaring several variables for looping.
    #Let's start at the newest page.

    link = 'https...'
    driver.get(link)

    #Here we use an XPath expression to get the initial page number.

    initial_page = int(driver.find_element_by_xpath('Xpath').text)
    print(f'The initial page is the No. {initial_page}')   
    final_page = initial_page + 120
    
    pages = np.arange(initial_page, final_page+1, 1)
    minimum_value = 0.95
    maximum_value = 1.2
    
    #to_place is a string value that must exist in a row for that row to be scraped;
    #rows that don't contain it are ignored.
    to_place = 'A particular place'

    #the same rule stated above applies to the variable POINTS.
    POINTS = 'POINTS'

    #set a final dataframe which will contain all the scraped data from the arange that
    #matches the parameters set (minimum_value, maximum_value, to_place, POINTS).
    df_final = pd.DataFrame()
    dataframe_final = pd.DataFrame()
    #set another final dataframe for the 2ND PART OF THE PROCESS.
    initial_df = pd.DataFrame()

    #set a for loop for each page from the arange.
    for page in pages:
        #INITIAL SEARCH.
        #look for the general data of the link:
        #the amount of results and pages for the execution of the for loop; the "page" variable is used within the {}.
        url = 'https...page={}&p=1'.format(page)
        
        print(f'\u001b[42m Current page: {page} \033[0m \u001b[42m Final page: {final_page}\033[0m \u001b[42m Pages left: {final_page - page}\033[0m \u001b[45m Try No. {y} out of 5\033[0m\n')
        driver.get(url)
        #Here we tell the scraper to try to find the total number of subpages a particular page has, if that page IS NOT empty.
        #If so, the scraper will proceed to execute the rest of the procedure.
        try:
            subpages = driver.find_element_by_xpath('Xpath').text
            print(f'Reading the information about the number of subpages of this page ... {subpages}')
            subpages = int(re.search(r'\d{0,3}$', subpages).group())
            print(f'This page has {subpages} subpages in total')
                            
            df = pd.DataFrame()
            df2 = pd.DataFrame()
            
            print(df)
            print(df2)
            
            #FOR LOOP.
            #search each subpage for all the rows that match the parameters set above:
            #minimum_value, maximum_value, to_place, POINTS.
            
            #set a sub-loop for each row from the table of each subpage of each page
            for subpage in range(1,subpages+1):
            
                url = 'https...page={}&p={}'.format(page,subpage)
                driver.get(url)
                identities_found = int(driver.find_element_by_xpath('Xpath').text.replace('A total of ','').replace(' identities found','').replace(',',''))
                identities_found_last = identities_found%50
                
                print(f'Page: {page} of {final_page}') #AT THIS LINE IT CRASHED THE LAST TIME
                .
                .
                .
        #If the particular page is empty
        except Exception:
            print(f"This page No. {page} IS EMPTY ¯\\_₍⸍⸌̣ʷ̣̫⸍̣⸌₎_/¯, NEXT!")
    .  
    .
    .

    y += 1

At first I thought this KernelInterrupted error was raised because my virtual machine ran out of virtual memory during the second iteration...

But after several tests I found that my program hardly consumes any RAM at all: the virtual RAM barely changed throughout the whole process, right up to the moment the kernel crashed. I can guarantee that.

So now I suspect the virtual CPU of my VM may be what is crashing the kernel, but if that is the case I just don't understand why. This is the first time I have dealt with a situation like this, and the program runs perfectly on my own computer.
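For what it's worth, here is a minimal sketch of how the RAM and CPU usage could be logged from inside the notebook to back up this observation. It assumes psutil is installed (pip install psutil); log_resources is a hypothetical helper name, and the 1-second sampling interval is an arbitrary choice:

import psutil

def log_resources(tag=''):
    #print the VM's current system-wide memory and CPU utilization.
    mem = psutil.virtual_memory()        #total/used/percent for the whole VM
    cpu = psutil.cpu_percent(interval=1) #CPU % averaged over a 1-second sample
    print(f'{tag} RAM: {mem.percent}% used '
          f'({mem.used / 1024**3:.2f} of {mem.total / 1024**3:.2f} GiB), CPU: {cpu}%')

#e.g. call log_resources(tag=f'page {page}:') once per iteration of the for loop.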

Is there any data scientist or machine learning engineer here who can help me? Thanks in advance.


1 Answer


I found the answer in the Deepnote community forum itself: the "free tier" machines on this platform are simply not guaranteed to run continuously (24/7), regardless of what program is executing in the VM.

That's it. Problem solved.
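As a practical aside (not from the Deepnote forum, just a hedged sketch): if the free-tier machine can be stopped at any moment, the damage can be limited by checkpointing each page's results to disk as soon as they are scraped, so a restarted kernel resumes instead of starting over. checkpoint.csv, save_page and load_done_pages are hypothetical names, and a 'page' column is assumed in the scraped frame:

import os
import pandas as pd

CHECKPOINT = 'checkpoint.csv'   #hypothetical file name

def save_page(df_page):
    #append one page's scraped rows to the checkpoint file on disk.
    df_page.to_csv(CHECKPOINT, mode='a', index=False,
                   header=not os.path.exists(CHECKPOINT))

def load_done_pages():
    #return the set of page numbers already scraped, so they can be skipped.
    if not os.path.exists(CHECKPOINT):
        return set()
    return set(pd.read_csv(CHECKPOINT)['page'])   #assumes a 'page' column

#inside the for loop: skip any page found in load_done_pages(), and call
#save_page(df) right after each page finishes scraping.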


Answered 2021-10-13T02:45:06.190