
WebCrawl

A web crawler browses the World Wide Web automatically, typically to build a Web index, as done by many existing search engines such as Google, Bing, Yandex, DuckDuckGo, and Qwant.
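At its core, crawling alternates between two steps: fetch a page, then extract the links it contains and queue them for later visits. The sketch below illustrates only the link-extraction step; the class and method names are illustrative and are not taken from this repository, and a real crawler would use an HTML parser (e.g. jsoup) rather than a regular expression.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of link extraction for a crawler's fetch/extract loop.
public class CrawlSketch {

    // Rough href extraction with a regex; sufficient for illustration only.
    private static final Pattern HREF =
        Pattern.compile("href\\s*=\\s*\"(http[^\"]+)\"");

    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1)); // group 1 is the quoted URL
        }
        return links;
    }

    public static void main(String[] args) {
        String html = "<a href=\"https://example.com/a\">a</a>"
                    + "<a href=\"https://example.com/b\">b</a>";
        System.out.println(extractLinks(html));
    }
}
```

The extracted URLs would then be pushed onto a frontier queue, with already-visited URLs filtered out to avoid revisiting pages.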

This software was developed as an assignment during the 2020/21 university year at the CERI, Avignon University (France), by the following students:

  • Abdelhakim RASFI
  • Hacene SADOUDI
  • Youssef ABIDAR
  • Imane HACEN
  • Mohamed KHARCHOUF

Organization

The source code is organized as follows... <list the folders/packages, explain their role>

Installation

Here is the procedure to install this software:

  1. Download and install the Java JDK
  2. Do that
  3. etc.

Use

In order to use the software, you must...

  1. Do this
  2. Do that
  3. etc.

The project wiki (put a hyperlink) contains detailed instructions regarding how to use the web crawler.

Dependencies

The project relies on the following libraries:

  • xxxxx : this library was used to...
  • yyyyy: ...

References

During development, we used the following bibliographic resources:

  • Webpage x: it explains the rules of robots.txt.
  • Book xxxx: it describes how to implement the PageRank algorithm.
  • etc.
  • etc.
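Since the references above mention implementing PageRank, here is a hedged sketch of the classic power-iteration form of the algorithm on a tiny hand-built graph. The adjacency lists, damping factor, and class name are illustrative assumptions, not this project's actual code.

```java
import java.util.Arrays;

// Sketch of PageRank power iteration: repeatedly redistribute each page's
// rank along its outgoing links, with a damping factor for random jumps.
public class PageRankSketch {

    // links[i] lists the pages that page i points to.
    public static double[] pageRank(int[][] links, double damping, int iterations) {
        int n = links.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // start from a uniform distribution
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n); // random-jump term
            for (int i = 0; i < n; i++) {
                if (links[i].length == 0) {
                    // Dangling page: spread its rank uniformly.
                    for (int j = 0; j < n; j++) next[j] += damping * rank[i] / n;
                } else {
                    for (int j : links[i]) next[j] += damping * rank[i] / links[i].length;
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // 0 -> 1, 1 -> 2, 2 -> 0: a symmetric cycle, so every rank is 1/3.
        int[][] graph = {{1}, {2}, {0}};
        System.out.println(Arrays.toString(pageRank(graph, 0.85, 50)));
    }
}
```

In a crawler, the graph would be built from the links discovered during crawling, and the resulting ranks used to order indexed pages.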
