This Django-based job portal aggregates Django-related job listings from Indeed and LinkedIn using Selenium for web scraping. Focused solely on Django jobs, the portal streamlines the job search process. Additionally, it includes a RESTful API for programmatically accessing job data.
- Make sure you have Python 3.x installed on your system.
- Clone the repository:
  ```bash
  git clone https://github.com/OZX-OG/JobPortal-django.git
  ```
- Set Up Virtual Environment: Ensure you have `virtualenv` installed on your machine. If not, install it using:
  ```bash
  pip install virtualenv
  ```
- Create a Virtual Environment: Navigate to the project directory and create a virtual environment:
  ```bash
  virtualenv venv
  ```
- Activate the Virtual Environment:
  - On Windows:
    ```bash
    venv\Scripts\activate
    ```
  - On Unix or macOS:
    ```bash
    source venv/bin/activate
    ```
- Install the required packages:
  ```bash
  pip install -r requirements.txt
  ```
- Start the Server: Once all requirements are installed, start the Django server:
  ```bash
  python manage.py runserver
  ```
- Access the Portal: Open a web browser and navigate to the URL displayed in the terminal where the Django server is running (by default, http://127.0.0.1:8000/).
- Update CELERY_BROKER_URL: In `settings.py`, update the `CELERY_BROKER_URL` with the URL provided by Railway:
  ```python
  CELERY_BROKER_URL = 'redis://default:NhmAigipb1kn3el1N4DaDEpnl1PpNcCe@viaduct.proxy.rlwy.net:39379'
  ```
- Start Celery Worker: Open a command prompt and run the following command to start the Celery worker:
  ```bash
  celery -A JobPortal worker -l INFO --without-gossip --without-mingle --without-heartbeat -Ofair --pool=solo
  ```
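For context, here is a minimal sketch of how a Celery app is typically wired into a Django project. The module path `JobPortal/celery.py` and the settings namespace are assumptions inferred from the `-A JobPortal` flag above, not taken from this repository:

```python
# JobPortal/celery.py -- assumed layout; adjust to match the actual project.
import os

from celery import Celery

# Point Celery at the Django settings module so it can read CELERY_* options
# such as CELERY_BROKER_URL.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'JobPortal.settings')

app = Celery('JobPortal')

# Load any CELERY_-prefixed settings from settings.py.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Discover tasks.py modules in each installed Django app.
app.autodiscover_tasks()
```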
- Scraping Data with Selenium: To initiate scraping with Selenium, use the following URLs:
  - `/scrape_linkedin/`: Scrapes job listings from LinkedIn.
  - `/scrape_indeed/`: Scrapes job listings from Indeed.
  - `/clear/`: Clears all unavailable jobs from both LinkedIn and Indeed.

Now your Celery worker is up and running, and you can start scraping data by accessing the URLs above.
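To illustrate how such a URL can hand the slow Selenium work off to the Celery worker, here is a hedged, illustrative sketch of one possible view/task pair. The names (`scrape_linkedin_task`, `scrape_linkedin`), the target URL, and the headless setup are hypothetical and are not taken from this repository:

```python
# Hypothetical sketch: a view that queues a Selenium scrape as a Celery task.
from celery import shared_task
from django.http import JsonResponse
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


@shared_task
def scrape_linkedin_task(query="django"):
    """Open LinkedIn's job search in a headless browser and collect listings."""
    options = Options()
    options.add_argument("--headless=new")  # no visible browser window on the worker
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(f"https://www.linkedin.com/jobs/search/?keywords={query}")
        # ... locate job cards here and save them to the database ...
        return driver.title
    finally:
        driver.quit()


def scrape_linkedin(request):
    """Hypothetical view behind /scrape_linkedin/: queue the task and return at once."""
    scrape_linkedin_task.delay()
    return JsonResponse({"status": "LinkedIn scrape queued"})
```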
- Random Job Endpoint:
  - Endpoint: `/api/random/`
  - Description: Access this endpoint to retrieve a random job listing without authentication. Each refresh provides a new random job.
  - JSON Response Template:
    ```json
    {
      "title": "{{ Job Title }}",
      "company": "{{ Company Name }}",
      "link": "{{ Job Link }}",
      "salary": "{{ salary }}",
      "specific_location": "{{ Location }}",
      "publish_date": "{{ Publish Date }}"
    }
    ```
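As a quick illustration, the unauthenticated endpoint can be queried from Python; the base URL below assumes the local development server started with `python manage.py runserver`:

```python
import requests

# Assumes the development server is running locally on the default port.
response = requests.get("http://127.0.0.1:8000/api/random/")
response.raise_for_status()

job = response.json()
print(job["title"], "at", job["company"], "-", job["link"])
```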
- Authenticated Job Listings Endpoint:
  - Endpoint: `/api/`
  - Description: Requires authentication with a token in the header (`'Authorization': 'Token {{ Your Token }}'`) to access all jobs in the database.
  - JSON Request Header Template (for authentication):
    ```json
    { "Authorization": "Token {{ Your Token }}" }
    ```
  - JSON Response Template:
    ```json
    {
      "source": "{{ Source }}",
      "location": ["{{ Location }}"],
      "title": "{{ Job Title }}",
      "company": "{{ Company Name }}",
      "job_link": "{{ Job Link }}",
      "company_url": "{{ Company URL }}",
      "image_url": "{{ Image URL }}",
      "salary": "{{ salary }}",
      "specific_location": "{{ Location }}",
      "applications": "{{ Applications }}",
      "seniority_level": "{{ Seniority Level }}",
      "employment_type": "{{ Employment Type }}",
      "Industries": "{{ Industries }}",
      "post_date": "{{ Post Date }}",
      "remote": "{{ Remote }}",
      "publish": "{{ Publish Date }}",
      "slug": "{{ Slug }}",
      "is_active": "{{ Is Active }}"
    }
    ```
  - Error Message Template (if token not provided):
    ```json
    { "detail": "Authentication credentials were not provided." }
    ```
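Similarly, here is a hedged example of calling the authenticated endpoint with the `requests` library. The base URL and the assumption that the endpoint returns a plain JSON list of jobs (matching the template above) are illustrative:

```python
import requests

TOKEN = "your-token-here"  # replace with the token issued for your account

# Assumes a local development server; adjust the base URL for a deployed instance.
response = requests.get(
    "http://127.0.0.1:8000/api/",
    headers={"Authorization": f"Token {TOKEN}"},
)
response.raise_for_status()

# Assumes the response body is a JSON list of job objects as in the template above.
for job in response.json():
    print(f'{job["title"]} @ {job["company"]} ({job["source"]})')
```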
Replace placeholders (`{{ }}`) with actual data when using the API endpoints. For the authenticated endpoint, ensure you include the correct token in the request header to access the job listings.
To utilize the job portal:
- Ensure Django is installed and the project dependencies are met.
- Run the Django server.
- Navigate to the portal's URL in a web browser.
- Use the search functionality to find Django job listings fetched from Indeed and LinkedIn.
Contributions are always welcome!
Feel free to submit pull requests or open issues for any enhancements or bug fixes.
We would like to express our gratitude to the following individuals and organizations for their contributions and support:
- Django Community: Thank you to the Django community for providing an excellent web framework that powers this project.
- Selenium Project: We extend our appreciation to the Selenium project for their powerful tool that enables automated web testing and scraping.
- Railway: Special thanks to Railway for providing a Redis instance for one day, which greatly facilitated our task of setting up Celery for asynchronous task processing.