1

We are trying to back up the source code of our Splunk dashboards and reports for version control. We are on an enterprise deployment where our REST calls are restricted. We can create and access dashboards and reports through the Splunk UI, but would like to know whether we can automatically back them up and store them in our version control system.


2 Answers

2

Without REST access, automated version control will be quite a challenge. I assume you don't have CLI access either, or you wouldn't be asking.

There are apps available that will do this for you. See https://splunkbase.splunk.com/app/4355/ and https://splunkbase.splunk.com/app/4182/

There is also a .conf presentation on the topic. See https://conf.splunk.com/files/2019/slides/FN1315.pdf
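For completeness, if REST access is ever opened up (even read-only on the management port, typically 8089), the export becomes a couple of HTTP calls against the saved-searches endpoint. A minimal sketch in Python; the host, app name, and bearer-token auth below are assumptions you would replace with your environment's details:

```python
import json
import urllib.request

def saved_searches_url(host: str, app: str) -> str:
    """Build the splunkd endpoint that lists all reports (saved searches) in an app."""
    return f"{host}/servicesNS/-/{app}/saved/searches?output_mode=json&count=0"

def fetch_saved_searches(host: str, app: str, token: str) -> list:
    """Return the 'entry' list for every report in the app (requires REST access)."""
    req = urllib.request.Request(
        saved_searches_url(host, app),
        headers={"Authorization": f"Bearer {token}"},  # token auth is an assumption
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["entry"]

# Hypothetical usage (needs a reachable Splunk management port):
# entries = fetch_saved_searches("https://prod.aws-cloud-splunk.com:8089", "sre_app", "<token>")
```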

answered 2021-08-17T19:13:26.250
0

For now, I wrote a Python script that reads/intercepts the network responses the browser makes for the Splunk reports URL (UI Inspect -> Network -> responses), which list the complete set of reports under my app along with their full details.

from time import sleep
import json
from selenium import webdriver
from selenium.webdriver import DesiredCapabilities

# make chrome log requests
capabilities = DesiredCapabilities.CHROME
capabilities["goog:loggingPrefs"] = {"performance": "ALL"}  # enable performance (network) logging

driver = webdriver.Chrome(
    desired_capabilities=capabilities, executable_path="/Users/username/Downloads/chromedriver_92"
)

spl_reports_url = "https://prod.aws-cloud-splunk.com/en-US/app/sre_app/reports"

driver.get(spl_reports_url)
sleep(5)  # wait for the requests to take place

# extract requests from logs
logs_raw = driver.get_log("performance")
logs = [json.loads(lr["message"])["message"] for lr in logs_raw]

# create directory to save all reports as .json files
from pathlib import Path

main_bkp_folder = 'splunk_prod_reports'
# Create the main directory that all dashboards will be downloaded into
Path(f"./{main_bkp_folder}").mkdir(parents=True, exist_ok=True)


# Function to write json content to file
def write_json_to_file(filenamewithpath, json_source):
    with open(filenamewithpath, 'w') as jsonfileobj:
        json_string = json.dumps(json_source, indent=4)
        jsonfileobj.write(json_string)
        
def log_filter(log_):
    return (
        # is an actual response
        log_["method"] == "Network.responseReceived"
        # and json
        and "json" in log_["params"]["response"]["mimeType"]
    )

counter = 0

# extract Network entries from each log event
for log in filter(log_filter, logs):
    #print(log)
    request_id = log["params"]["requestId"]
    resp_url = log["params"]["response"]["url"]
    # print only results_preview 
    if "searches" in resp_url:
        print(f"Caught {resp_url}")
        counter += 1
        nw_resp_body = json.loads(driver.execute_cdp_cmd("Network.getResponseBody", {"requestId": request_id})['body'])
        for each_report in nw_resp_body["entry"]:
            report_name = each_report['name']
            print(f"Extracting report source for {report_name}")
            report_filename = f"./{main_bkp_folder}/{report_name.replace(' ','_')}.json"
            write_json_to_file(report_filename,each_report)
            print("Completed.")

print("All reports source code exported successfully.")

The code above is far from production-ready; error handling, logging, and modularization have not been added yet. Also note that the script above drives a browser UI; in production, the script will run in a Docker image with ChromeOptions configured for headless mode.
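When the intercepted payload comes back, each `entry` element follows the splunkd saved-search shape: a `name` plus a `content` dict whose `search` key holds the SPL string. If you only want the fields worth diffing in version control, a small filter helper keeps the stored JSON stable; this is a sketch against a hand-made sample, and the field list is an assumption you can adjust:

```python
def extract_report_fields(entry: dict, fields=("search", "cron_schedule", "disabled")) -> dict:
    """Keep only the report name and the content fields worth versioning.

    Assumes the splunkd-style payload where each entry has a 'name' and a
    'content' dict (the SPL string lives under content["search"]).
    """
    content = entry.get("content", {})
    return {"name": entry["name"], **{f: content[f] for f in fields if f in content}}

# Hand-made sample mirroring the shape of one "entry" element
sample = {
    "name": "Daily Error Report",
    "content": {"search": "index=app_logs level=ERROR | stats count by host",
                "cron_schedule": "0 6 * * *", "is_scheduled": True},
}
print(extract_report_fields(sample))
```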

Instead of

driver = webdriver.Chrome(
    desired_capabilities=capabilities, executable_path="/Users/username/Downloads/chromedriver_92"
)

use:

from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--window-size=1420,2080')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
driver = webdriver.Chrome(
    desired_capabilities=capabilities, options=chrome_options, executable_path="/Users/username/Downloads/chromedriver_92"
)

From here you can customize the options as needed.

answered 2021-09-04T03:51:52.873