
I want to extract Coursera video download links programmatically (mainly with Python) for lecture pages like these:

https://www.coursera.org/learn/human-computer-interaction/lecture/s4rFQ/the-interaction-design-specialization

https://www.coursera.org/learn/calculus1/lecture/IYGhT/why-is-calculus-going-to-be-so-much-fun

After reading many articles on this, I still can't find a way to extract the video download links programmatically. Can anyone provide a step-by-step solution for extracting them? Thanks!

PS: I know about this project, but the code was too complex for me, so I gave up on it.


EDIT: Thanks for the answers. I have since managed to build a Chrome extension that downloads the videos: http://mathjoy.tumblr.com/post/130547895523/mediadownloader


2 Answers


I made myself a Coursera downloader today using Python, the selenium package, and Chrome WebDriver. (The rest of this answer requires these three tools.)
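Before running anything, it is worth checking that the pieces talk to each other. A minimal sanity-check sketch, assuming selenium is installed (pip install selenium) and the ChromeDriver binary is saved at the same path the script below uses:

from selenium import webdriver

# open Chrome through the driver, load a page, and print its title
driver = webdriver.Chrome("E:\\chromedriver.exe")  # adjust to your ChromeDriver path
driver.get("https://www.coursera.org")
print(driver.title)
driver.quit()

If a Chrome window opens and a page title is printed, the setup is working.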

First, find the course you want on Coursera and enroll in it. After that, fill in the variables at the top of the code below and run it. It will take a while, but the result (all the video links) will be written to a text file:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

# ########################### #
# ####-fill these vars-###### #
# ########################### #

# coursera login information:

username = "~"  # e.g. : username = "john@doe.com"
password = "~"  # e.g. : password = "12345asdfg"

# course details to download IMPORTANT: you should be enrolled in the course

path_course = "https://www.coursera.org/learn/programming-languages/home/week/1"  # link to the course first week e.g. : path_course = "https://www.coursera.org/learn/game-theory-1/home/week/1"
num_of_weeks = 5  # number of course weeks(or desired weeks to download) e.g. : num_of_weeks = 5
path_to_save = "E:\\programming-languages-coursera-links.txt"  # path to the file in which the links will be saved e.g. : path_to_save = "E:\\interactive-python-links.txt"
#############################
#############################
#############################
print_each_link = False  # set to True to also print each link while writing the file


# defining functions :
def get_links_of_week(week_add):
    """
    this function gets the download links from the course.
    :param week_add: address to the specific week in order to get links
    :return: a list containing all download links regarding the specific week.
    """
    driver.get(week_add)
    print("going for" + week_add)
    driver.implicitly_wait(5)
    elems = driver.find_elements_by_xpath("//a[@href]")
    links = []
    for elem in elems:
        sublink = elem.get_attribute("href")
        # print(sublink)
        if sublink.find("lecture") != -1 and sublink not in links:
            links.append(sublink)
    # print("---------------")
    # print(links)
    inner_links = []

    for link in links:
        driver.get(link)
        driver.implicitly_wait(5)
        inner_elems = driver.find_elements_by_xpath("//a[@href]")
        for inelem in inner_elems:
            sub_elem = inelem.get_attribute("href")

            # print(sub_elem)
            if sub_elem.find("mp4") != -1:
                print("the link : " + sub_elem[37:77] + "... fetched")
                inner_links.append(sub_elem)

    return inner_links


def get_week_list():
    """
    this function gets the URL address from predefined variables from the top
    :return: a list containing each week main page.
    """
    weeks = []
    print('list of weeks are : ')
    for x in range(1, num_of_weeks + 1):
        week_url = path_course[:-1] + str(x)  # swap in the week number at the end of the URL
        weeks.append(week_url)
        print(week_url)
    return weeks


# loading chrome driver
driver = webdriver.Chrome("E:\\chromedriver.exe")
# login to Coursera
driver.get(path_course)
driver.implicitly_wait(10)
email = driver.find_element_by_name("email")
email.click()
email.send_keys(username)
pas = driver.find_element_by_name("password")
pas.click()
pas.send_keys(password)
# NOTE: this absolute XPath targets the login form's submit button and may
# break if Coursera changes its page layout
driver.find_element_by_xpath("//*[@id=\"rendered-content\"]/div/div/div/div[3]/div/div/div/form/button").send_keys(
    Keys.RETURN)

# fetching links from each week web page
weeks_link = get_week_list()
all_links = []
for week in weeks_link:
    all_links += get_links_of_week(week)
driver.close()

# write to file
print("now writing to file ...")
text_file = open(path_to_save, "w")
for each_link in all_links:
    if print_each_link:
        print(each_link + "\n")
    text_file.write(each_link)
    text_file.write("\n")
text_file.close()
print("---------------------------------")
print("all Links are fetched successfully")

If you run into any trouble, comment here.

Answered 2016-09-08T19:15:39.413

I would use the Requests library to make the requests (fetch pages, log in, and so on) and Beautiful Soup to parse the resulting data. The general idea is:

import requests
from bs4 import BeautifulSoup

r = requests.get(url_you_want)
domTree = BeautifulSoup(r.text, "html.parser")  # parse the fetched HTML
link = domTree.find(id="WhateverIDTheLinkHasInTheDownloadPage")
[...etc...]
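To flesh that idea out: a hedged sketch of the same approach with a login step, using requests.Session so cookies persist across requests. The login endpoint and form field names below are placeholders, not Coursera's real ones (you would need to find those in your browser's developer tools). Also note that Coursera renders much of its content with JavaScript, which is why the selenium answer above drives a real browser; plain Requests only sees links present in the initial HTML:

import requests
from bs4 import BeautifulSoup

session = requests.Session()  # keeps cookies (the login state) between requests

# hypothetical login endpoint and form fields; inspect the real login form to fill these in
login_url = "https://www.coursera.org/some-login-endpoint"
session.post(login_url, data={"email": "john@doe.com", "password": "12345asdfg"})

# fetch a lecture page while logged in and scan it for .mp4 links
r = session.get("https://www.coursera.org/learn/calculus1/lecture/IYGhT/why-is-calculus-going-to-be-so-much-fun")
dom_tree = BeautifulSoup(r.text, "html.parser")
for a in dom_tree.find_all("a", href=True):
    if ".mp4" in a["href"]:
        print(a["href"])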

If you expect someone to do the whole job for you, I can't help you, but...

Answered 2015-06-30T09:21:52.703