This is a simple bot/script I made to publish my Crunchyroll Guest Passes to Reddit. It uses Selenium and chromedriver to extract valid guest passes from Crunchyroll and PRAW to publish them to /r/Crunchyroll's weekly Megathread. This is not a bot made to run indefinitely; however, it can be altered to do so if one so desired. It is intended for use with a task scheduler/cron job that checks for new guest passes once a month (or once every four months, if you wish to publish the passes in sets before they expire).
Due to changes in the PRAW library, all users must now create a Reddit script app. As such, the data file must now include additional fields. See below for a quick guide on how to set this up.
As of 4.0.0, binaries for chromedriver and other tooling will no longer be included. Please refer to this link on setting up and installing chromedriver.
- Log on to the bot account.
- Go to the bot account's **preferences** from the upper-right corner.
- Click the **apps** tab.
- Click the **create another app** button.
  - The button text may appear differently if you have no apps set up.
- In the prompts, ensure that the **script** radio button is selected and **redirect uri** is set to `http://localhost:8080`. The other fields can be filled with whatever you want.
- Click the **create app** button when done.
- You should now see the app created. Right below the name, under **personal use script**, will be your `client_id`. Within the box, to the right of the word **secret**, is your `client_secret`.
You will need to have Chrome installed on your system at its default installation path, since chromedriver works with your Chrome installation to retrieve Crunchyroll Guest Passes.
Note: As of 4.0.0, chromedriver will not be provided. Please refer to this link on setting it up.
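For context, a rough sketch of how a Selenium script hands off to chromedriver is below. The driver path, function name, and URL are illustrative placeholders, not CrunchyBot's actual values; the import is deferred so the sketch reads without Selenium installed:

```python
# Hypothetical sketch of driving Chrome through chromedriver with Selenium 4.
# driver_path and the URL are placeholders, not CrunchyBot's real values.
def open_crunchyroll(driver_path="/usr/local/bin/chromedriver"):
    # Deferred import: this sketch only needs selenium when actually called.
    from selenium import webdriver
    from selenium.webdriver.chrome.service import Service

    driver = webdriver.Chrome(service=Service(executable_path=driver_path))
    driver.get("https://www.crunchyroll.com")
    return driver
```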
```
pip install crunchy-bot
```
Run `crunchy init` to generate a config file:

```json
{
    "crunchy_username": "crunchy_user",
    "crunchy_password": "crunchy_pass",
    "reddit_client_id": "client_id",
    "reddit_client_secret": "client_secret",
    "reddit_user_agent": "CrunchyBot:v4.0.0 (hosted by /u/{YOUR_USERNAME})",
    "reddit_username": "reddit_user",
    "reddit_password": "reddit_pass",
    "log_dir": "/tmp/crunchybot/logs"
}
```

or save this to `~/.crunchybot`.
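As a minimal sketch, reading this config back in Python could look like the following. The key names are taken from the example above, but the `load_config` helper is hypothetical, not part of crunchy-bot's API:

```python
import json
from pathlib import Path

# Key names copied from the example .crunchybot config in this README.
REQUIRED_KEYS = {
    "crunchy_username", "crunchy_password",
    "reddit_client_id", "reddit_client_secret",
    "reddit_user_agent", "reddit_username", "reddit_password",
    "log_dir",
}

def load_config(path="~/.crunchybot"):
    """Load the JSON config and fail fast on any missing keys."""
    cfg = json.loads(Path(path).expanduser().read_text())
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"config is missing keys: {sorted(missing)}")
    return cfg
```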
Execute `crunchy publish [--config path/to/.crunchybot] [--debug/-d]` to start scraping and publishing.
Assuming you have `pyenv` and `poetry` installed on your system, run the following within the repo:

```
pyenv local 3.8  # supports 3.6 - 3.9
poetry shell
```
This will set up a virtual environment for CrunchyBot to work in without interfering with your other Python projects.
With `poetry` initialized, run:

```
$ poetry install
```

This will use the `poetry.lock` file to fetch and verify dependencies.
Install PRAW and Selenium by running the following command:

```
$ pip install -e .
```
Once set up, with or without `poetry`, the `crunchy` command line should be available for execution. This will also generate a `version.py` using `setuptools_scm`.
Make and test your changes locally. Pull Requests are welcome.
Run `crontab -e` and add:

```
0 0 1 * * zsh -lc "/path/to/crunchy publish"
```
You can replace `zsh -lc` with your shell's equivalent. This is mainly to execute any of your profile presets that set up `PATH` and the other environment variables required to run.
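The cron expression `0 0 1 * *` fires at 00:00 on the first day of every month. If you instead altered the bot to run indefinitely, an equivalent guard could look like this sketch (the function name is illustrative, not part of crunchy-bot):

```python
from datetime import date

def should_publish(today: date) -> bool:
    """Mirror the cron schedule 0 0 1 * *: publish on the first of the month."""
    return today.day == 1
```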
Add the Python script to the Windows Task Scheduler with a monthly frequency. Here is a link to set up the Task Scheduler.
Note: Crunchyroll or Cloudflare has flagged IPs coming from GitHub Actions CI, so this method is less likely to succeed now.
You can also fork this repository and use GitHub Actions to run this task on the first of each month. You must add the required data as all-caps snake-case secret variables.
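One way those secrets could map back to the config, as a sketch: the all-caps snake-case names below are assumed to mirror the config fields, and `config_from_env` is a hypothetical helper, not the workflow's actual contract:

```python
import os

# All-caps snake-case secret names, assumed to mirror the config keys.
SECRET_NAMES = [
    "CRUNCHY_USERNAME", "CRUNCHY_PASSWORD",
    "REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
    "REDDIT_USER_AGENT", "REDDIT_USERNAME", "REDDIT_PASSWORD",
]

def config_from_env(env=None):
    """Build a config dict from environment variables, failing on gaps."""
    env = os.environ if env is None else env
    missing = [name for name in SECRET_NAMES if name not in env]
    if missing:
        raise KeyError(f"missing secret variables: {missing}")
    return {name.lower(): env[name] for name in SECRET_NAMES}
```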