Results 31 to 40 of about 1,097,206 (161)
Evaluating the reliability and readability of online information on osteoporosis
Objective: Internet use to obtain health-related information is widespread among patients; however, concerns remain about the reliability and comprehensibility of online information.
Ozan Volkan Yurdakul et al.
doaj
Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers. [PDF]
Web spammers aim to obtain higher ranks for their web pages by including spam content that deceives search engines into listing those pages in search results even when they are unrelated to the search terms.
Mansour Alsaleh, Abdulrahman Alarifi
doaj
A three-year study on the freshness of Web search engine databases [PDF]
This paper deals with one aspect of the index quality of search engines: index freshness. The purpose is to analyse the update strategies of the major Web search engines Google, Yahoo, and MSN/Live.com.
Lewandowski, Dirk
core
Web-Teaching - A Guide to Interactive Teaching for the World-Wide Web, by David W. Brooks. New York: Plenum, 1997. ISBN: 0-306-45552-8.
Barker, Philip, et al.
core
An Efficient Technique for Detection of Suspicious Malicious Web Site
In today's web, websites have become attackers' main target. Previously, virus signatures were used to detect malicious web pages. In this paper, malicious web pages are detected using a prototype system based on the concept …
K. Pragadeesh Kumar et al.
doaj
WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks [PDF]
To effectively score pages with uncertainty in web social networks, we first proposed a new concept called transition probability matrix and formally defined the uncertainty in web social networks.
Shaojie Qiao et al.
doaj
Metadata Schema Used in OCLC Sampled Web Pages [PDF]
The tremendous growth of Web resources has made information organization and retrieval more and more difficult. As one approach to this problem, metadata schemas have been developed to characterize Web resources.
Fei Yu
doaj
A Hybrid Revisit Policy For Web Search
A crawler is a program that retrieves and stores pages from the Web, commonly for a Web search engine. A crawler often has to download hundreds of millions of pages in a short period of time and has to constantly monitor and refresh the downloaded pages.
Vipul Sharma, Mukesh Kumar, Renu Vig
doaj
Automatically attaching web pages to an ontology [PDF]
This paper describes a proposed system for automatically attaching material from the world wide web to concepts in an ontology. The motivation for this research stems from the Diogene project, which requires the project's own databases of learning ...
Crestani, F., Villa, R., Wilson, R.
core
Relating Web pages to enable information-gathering tasks [PDF]
We argue that relationships between Web pages are functions of the user's intent. We identify a class of Web tasks - information-gathering - that can be facilitated by a search engine that provides links to pages which are related to the page the user is …
Bagchi, Amitabha, Lahoti, Garima
core

