Abstract

Auto-Explore the Web – Web Crawler

Soumick Chatterjee, Asoke Nath

The World Wide Web is an ever-growing public library with hundreds of millions of books and no central management system. Finding a piece of information without a proper directory is like finding a needle in a haystack. Search engines solve this problem by indexing a portion of the content available on the internet. To accomplish this, search engines use an automated program known as a web crawler. The most vital job of the web is information retrieval, and doing it efficiently. A web crawler helps accomplish that by supporting search indexing and by building archives: it automatically visits all available links, which are then indexed. The use of web crawlers is not limited to search engines; they can also be used for web scraping, spam filtering, identifying unauthorized use of copyrighted content, and detecting illegal and harmful web activities. Web crawlers face various challenges when crawling deep web content, multimedia content, and the like. Various crawling techniques and various web crawlers are discussed in this paper.
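The crawl loop described above (visit a page, collect its outgoing links, queue the unseen ones for later visits) can be sketched as a breadth-first traversal. This is a minimal illustration, not the authors' implementation; the in-memory `site` dictionary is a hypothetical stand-in for real HTTP fetching and link extraction.

```python
from collections import deque

def crawl(seed, fetch_links, max_pages=100):
    """Breadth-first crawl: visit pages in FIFO order, collect outgoing
    links, and skip URLs that are already in the frontier or visited."""
    visited = []            # pages crawled, in visit order
    seen = {seed}           # URLs discovered so far (avoids re-queuing)
    frontier = deque([seed])
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        visited.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return visited

# Hypothetical in-memory "web" standing in for real HTTP requests.
site = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}
print(crawl("/", lambda u: site.get(u, [])))  # → ['/', '/a', '/b', '/c']
```

A production crawler would replace the lambda with an HTTP fetch plus HTML link extraction, and add politeness controls (robots.txt, rate limiting), but the frontier-and-visited-set loop is the core of the technique.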


