Commit
Fixed a bug
Fixed a bug where the anime search (搜番) feature could no longer retrieve results because the source site added Cloudflare anti-DDoS protection.
Angel-Hair committed Aug 23, 2020
1 parent 7150675 commit 8c0f180
Showing 3 changed files with 24 additions and 19 deletions.
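In brief: the old code scraped the HTML topic list at share.dmhy.org, which now sits behind a Cloudflare anti-DDoS check, so the fix queries the site's RSS feed instead and then visits each entry's detail page. Below is a minimal sketch of the new retrieval path, assuming only what the diff shows; the helper name `anime_entry_links` and the inlined `MAXINFO_ANIME` value are illustrative, not part of the repository.

```python
import feedparser
from urllib import parse

MAXINFO_ANIME = 5  # stand-in for the cap configured in config.py

def anime_entry_links(key_word: str) -> list:
    # Build the RSS query URL; parse.quote() is needed because raw CJK
    # keywords are not valid characters in a URL query string.
    url = 'https://share.dmhy.org/topics/rss/rss.xml?keyword=' + parse.quote(key_word)
    feed = feedparser.parse(url)
    # Each RSS entry links to a detail page, which is still scraped for
    # the magnet link, title, category, and size.
    return [entry.link for entry in feed.entries][:MAXINFO_ANIME]
```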
6 changes: 3 additions & 3 deletions README.md
@@ -272,7 +272,7 @@ TIMELIMIT_IMAGE: float = 7 # Time limit for the image search (识图) feature
 TIMELIMIT_REIMU: float = 12 # Time limit for the ride (上车) feature
 TIMELIMIT_JD: float = 7 # Time limit for the Japanese dictionary (日语词典) feature
 TIMELIMIT_TRANSL: float = 7 # Time limit for the translation (翻译) feature
-TIMELIMIT_ANIME: float = 7 # Time limit for the anime search (搜番) feature
+TIMELIMIT_ANIME: float = 16 # Time limit for the anime search (搜番) feature

 # Booleans
 CONFIGURATION_WIZARD: bool = True # Whether to confirm running the configuration wizard on each startup
@@ -326,8 +326,8 @@ RSSINTERVAL: dict = {
 * `TIMELIMIT_IMAGE`: time limit, in seconds, for the image search (识图) feature. If a request to an API source times out, a warning is printed to the console and the corresponding content is simply omitted from the reply. Set it according to your server's network conditions; a value between 5 and 10 is recommended.
 * `TIMELIMIT_JD`: time limit, in seconds, for the Japanese dictionary (日文词典) feature; details as above.
 * `TIMELIMIT_TRANSL`: time limit, in seconds, for the translation (翻译) feature; details as above.
-* `TIMELIMIT_ANIME`: time limit, in seconds, for the anime search (搜番) feature; details as above.
-* `TIMELIMIT_REIMU`: time limit, in seconds, for the ride (上车) feature. If a request to an API source times out, a warning is printed to the console and the corresponding content is omitted from the reply. Set it according to your server's network conditions and the value of `MAXINFO_REIMU`; a value between 9 and 14 is recommended.
+* `TIMELIMIT_REIMU`: time limit, in seconds, for the ride (上车) feature. Besides the behavior described above, set it according to your server's network conditions and the value of `MAXINFO_REIMU`; a value between 9 and 14 is recommended.
+* `TIMELIMIT_ANIME`: time limit, in seconds, for the anime search (搜番) feature. Besides the behavior described above, set it according to your server's network conditions and the value of `MAXINFO_ANIME`; and since the feature now also issues a separate request for an RSS feed, a value between 12 and 18 is recommended.
 * Booleans
 * `CEICONLYCN`: whether the earthquake alert (地震速报) feature reports domestic earthquakes only; set it to True if you only want domestic earthquakes reported. True is recommended.
 * `RECOMMENDER_MUSIC`: whether replies from the music recommendation (音乐推荐) feature show who recommended the track.
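All the `TIMELIMIT_*` options follow the same contract: when a source exceeds its limit, the bot logs a console warning and leaves that source out of the reply. Here is a minimal sketch of that contract using a plain `asyncio.wait_for` wrapper; this is an illustration only, since the project itself enforces the limits with the `@timeout(...)` decorator from `kth_timeoutdecorator`, as the data_source.py diff below shows.

```python
import asyncio
import logging

TIMELIMIT_ANIME: float = 16  # value taken from the README snippet above

logger = logging.getLogger("xunbot")

async def fetch_with_limit(coro, limit: float) -> str:
    # Shared contract of the TIMELIMIT_* options: on timeout, emit a
    # console warning and return an empty string so the final reply
    # simply omits this source's section.
    try:
        return await asyncio.wait_for(coro, timeout=limit)
    except asyncio.TimeoutError:
        logger.warning("API source timed out after %.1fs", limit)
        return ""
```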
2 changes: 1 addition & 1 deletion config.py
@@ -40,7 +40,7 @@
 TIMELIMIT_REIMU: float = 12 # Time limit for the ride (上车) feature
 TIMELIMIT_JD: float = 7 # Time limit for the Japanese dictionary (日语词典) feature
 TIMELIMIT_TRANSL: float = 7 # Time limit for the translation (翻译) feature
-TIMELIMIT_ANIME: float = 7 # Time limit for the anime search (搜番) feature
+TIMELIMIT_ANIME: float = 12 # Time limit for the anime search (搜番) feature

 # Booleans
 CONFIGURATION_WIZARD: bool = True # Whether to confirm running the configuration wizard on each startup
35 changes: 20 additions & 15 deletions xunbot/plugins/anime/data_source.py
@@ -1,6 +1,8 @@
 import requests
 from lxml import etree
 import time
+import feedparser
+from urllib import parse

 from kth_timeoutdecorator import *
 from xunbot import get_bot
@@ -13,7 +15,7 @@

 async def from_anime_get_info(key_word: str) -> str:
     repass = ""
-    url = 'https://share.dmhy.org/topics/list?keyword=' + key_word
+    url = 'https://share.dmhy.org/topics/rss/rss.xml?keyword=' + parse.quote(key_word)
     try:
         xlogger.debug("Now starting get the {}".format(url))
         repass = await get_repass(url)
@@ -22,25 +24,28 @@ async def from_anime_get_info(key_word: str) -> str:

     return repass

+
 @timeout(TIMELIMIT_ANIME)
 async def get_repass(url: str) -> str:
     repass = ""
     putline = []

-    html_data = requests.get(url)
-    html = etree.HTML(html_data.text)
-
-    anime_list = html.xpath('//div[@class="clear"]/table/tbody/tr')
-    if len(anime_list) > MAXINFO_ANIME:
-        anime_list = anime_list[:MAXINFO_ANIME]
-
-    for anime in anime_list:
-        class_a = anime.xpath('./td[@width="6%"]//font/text()')[0]
-        title = anime.xpath('./td[@class="title"]/a')[0].xpath('string(.)').strip()
-        magent_long = anime.xpath('./td/a[@class="download-arrow arrow-magnet"]/@href')[0]
-        magent = magent_long[:magent_long.find('&')]
-        size = anime.xpath('./td[last()-4]/text()')[0]
-
+    d = feedparser.parse(url)
+    url_list = [e.link for e in d.entries]
+
+    if len(url_list) > MAXINFO_ANIME:
+        url_list = url_list[:MAXINFO_ANIME]
+
+    for u in url_list:
+        html_data = requests.get(u)
+        html = etree.HTML(html_data.text)
+
+        magent = html.xpath('.//a[@id="a_magnet"]/text()')[0]
+        title = html.xpath('.//h3/text()')[0]
+        item = html.xpath('//div[@class="info resource-info right"]/ul/li')
+        class_a = item[0].xpath('string(.)')[5:].strip().replace("\xa0","").replace("\t","")
+        size = item[3].xpath('string(.)')[5:].strip()
+
         putline.append("【{}】| {}\n【{}】| {}".format(class_a, title, size, magent))

     repass = '\n\n'.join(putline)
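After this rewrite, one search costs a single RSS request plus up to `MAXINFO_ANIME` detail-page requests, which is why the commit raises `TIMELIMIT_ANIME`. Below is a hypothetical standalone driver for exercising the new code path; the import path follows the file tree above, and running the module outside the bot's runtime is an untested assumption.

```python
import asyncio

# Hypothetical smoke test; inside xunbot this coroutine is awaited by
# the anime plugin's command handler rather than called directly.
from xunbot.plugins.anime.data_source import from_anime_get_info

async def main() -> None:
    # CJK keywords are fine: the function URL-quotes them via parse.quote().
    reply = await from_anime_get_info("Re:Zero")
    print(reply if reply else "no results, or every source timed out")

if __name__ == "__main__":
    asyncio.run(main())
```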
