Two previous studies, one done at Stanford in 1997 based on data collected by the Google search engine, and one done at Digital in 1996 based on AltaVista data, revealed that almost a third of the Web consists of duplicate pages. Both studies identified mirroring, that is, the systematic replication of content over a pair of hosts, as the principal cause of duplication, but did not further investigate this phenomenon. The main aim of this paper is to present a clearer picture of mirroring on the Web. As input we used a set of 179 million URLs found during a Web crawl done in the summer of 1998. We looked at all hosts with more than 100 URLs in our input (about 238,000), and discovered that about 10% were mirrored to varying degrees. The paper presents data about the prevalence of mirroring based on a mirroring classification scheme that we define.
There are numerous reasons for mirroring: technical (e.g., to improve access time), commercial (e.g., different intermediaries offering the same products), cultural (e.g., same content in two languages), social (e.g., sharing of research data), and so forth. Although we have not done an exhaustive study of the causes of replication, we discuss and provide examples for several representative cases.
Our technique for detecting mirrored hosts from large sets of collected URLs depends mostly on the syntactic analysis of URL strings, and requires retrieval and content analysis only for a small number of pages. We are able to detect both partial and total mirroring, and handle cases where the content is not byte-wise identical. Furthermore, our technique is computationally very efficient and does not assume that the initial set of URLs gathered from each host is comprehensive. Hence, this approach has practical uses beyond our study, and can be applied in other settings. For instance, for web crawlers and caching proxies, detecting mirrors can be valuable to avoid redundant fetching, and knowledge of mirroring can be used to compensate for broken links.
Keywords: Mirroring, Content Duplication, Smart Proxies, Smart Crawlers, Web Statistics.
Two previous studies, one from Digital, based on 30 million pages collected by the AltaVista search engine during 1996, and one from Stanford, based on 26 million pages collected by the Google search engine during 1997, point out the large amount of page duplication on the Web. The Digital study reported 30% duplication, and the Stanford study about 36% duplication. Both studies identified mirroring, that is, the systematic replication of content over a pair of hosts, as the principal cause of duplication, but did not shed further light on this phenomenon. The main aim of our investigation, based on a set of 179 million URLs obtained from a web crawl in the summer of 1998, is to present a clearer picture of mirroring on the WWW.
Among the hosts represented in our input list of URLs, roughly 238,000 had contributed 100 or more URL samples. We looked for mirrors only in this set of well-represented hosts, and discovered that about 10% of them were mirrored by other well-represented hosts to varying degrees. In the next section we make this notion more precise by defining several degrees of mirroring, and in Section 4 we present data about the prevalence and extent of mirroring according to our classification.
There are numerous reasons for mirroring: technical (e.g., replication to improve access time), commercial (e.g., different intermediaries offering the same products), cultural (e.g., same content in two languages), social (e.g., database of shared research), etc. Although we have not done a comprehensive study of the reasons behind mirroring, we present several cases that seem representative.
The technique we devised to detect mirrored hosts from large data sets depends mostly on the syntactic analysis of URL strings and requires fetching and content analysis only for a small number of pages. We use probabilistic tests to establish the degree of mirroring, which makes our technique computationally very efficient. We are able to detect both partial and total mirroring, and handle cases where the content is not exactly identical. Furthermore, our strategy does not assume that the initial set of URLs from each host is comprehensive. Hence, our technique has practical uses beyond our study and can be applied in other settings. For example, from the point of view of web crawlers and caching proxies, detecting mirrors can be valuable to avoid redundant fetching, and knowledge of the existence of mirrors can be used to compensate for broken links.
The paper proceeds as follows: in Section 2 we establish a classification of mirroring; Section 3 describes our approach to detecting and classifying mirrored hosts; Section 4 presents data from our experiment; Section 5 discusses motives for mirroring; Section 6 presents other applications of this technique; Section 7 mentions related work; and in Section 8 we draw some conclusions.
We define two hosts to be mirrors if a high percentage of paths (the portions of the URLs after the hostname) are valid on both hosts, and these common paths link to documents that have similar content. Based on how completely the structure is replicated and how closely the content matches, we classify mirrored host pairs into the following levels:

|Level 1||Same Structure. Identical Content.|
|Level 2||Same Structure. Equivalent Content.|
|Level 3||Same Structure. Similar Content.|
|Level 4||Partial Structure. Similar Content on replicated paths.|
|Level 5||Same Structure. Different Content.|
Another factor that can be used to detect and classify mirrors is the IP address: mirrors often have related IP addresses. A DNS lookup provides one or more IP addresses for each host. If two hostnames map to a common IP address, they fetch pages from a common machine; however, this does not confirm a mirroring relationship, because a hostname may map to many IP addresses on different machines, and a given IP address may be used to implement virtual hosts that have no mirroring relationship. A DNS match thus helps build confidence about mirroring, but does not confirm it. We do not consider server IP addresses in our classification scheme.
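As an illustration, such a DNS check might look as follows. This is only a sketch: resolution uses the standard socket API, and the IP addresses in the comments are made up.

```python
import socket

def resolve(host):
    """All IP addresses a hostname maps to (empty set on DNS failure)."""
    try:
        return set(socket.gethostbyname_ex(host)[2])
    except socket.gaierror:
        return set()

def shared_ips(ips_a, ips_b):
    """Overlap between two hosts' address sets.

    A non-empty overlap is corroborating -- never conclusive -- evidence
    of mirroring, since one IP address may serve many unrelated virtual
    hosts, and one hostname may map to several machines.
    """
    return ips_a & ips_b
```

In a pipeline, `shared_ips(resolve(a), resolve(b))` would only be used to raise or lower confidence in a candidate pair, for the reasons given above.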
Stage I: Candidate Pair Detection. In this stage we consider the URLs available from each of the hosts in our database, sample a subset of these URLs, and compute the syntactic similarity between these subsets. As explained below, we use coordinated sampling to increase the likelihood of selecting the same paths from hosts that happen to be mirrors. The output from this stage is a list of host pairs that are potential mirrors, ordered by likelihood. Note that our tests are probabilistic in nature: it is conceivable that we may fail to recognize a pair of mirrored hosts in Stage I, or err in measuring the degree of overlap between them in Stage II. The experimental evidence, however, suggests that the tests are fairly reliable.
Stage II: Host Pair Classification. In this stage we process the list of candidate host pairs from Stage I, and test in each case if the hosts are indeed mirrors and estimate the extent of their overlap.
We describe each of the stages next.
For example, paths such as dec97/sales/charts/..., feb98/sales/charts/..., and aug98/sales/charts/..., sampled from different hosts, would fail to be correlated if one considered prefixes as features. Therefore, as we explain below, we use pairs of consecutive directory names along the path (called word bigrams) as features. Matching word bigrams across hosts proves to be very effective.
For large data sets (179 million URLs in our case) reducing the number of features to be considered is a priority. One way to do this is to ignore hosts which contribute a small number of URLs to the collection. Such hosts are either small in size and hence unlikely to have much mirrored content, or poorly sampled, in which case the samples in the collection may not be very representative. We decided to only consider hosts that had at least 100 URLs in our collection. A further data reduction is obtained as follows: for every host we first sort the list of URLs, and then only consider those paths whose strings upon hashing yielded a value which is 0 mod m. The aim of this step is to increase the correlation between the selected paths. If a path is selected on one host, it increases the likelihood that the same path is selected on its mirror. Initially, we take m to be 5. After the first 20 paths are sampled this way, m is doubled, and this doubling is repeated at certain thresholds. The doubling thresholds are picked so that the sample count from a host is sub-linearly proportional to size. This ensures that on the one hand we sample more paths from larger sites than from smaller sites, but on the other hand large sites do not dominate the list of features.
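The coordinated sampling scheme above can be sketched as follows. This is a simplification: we use CRC32 as a stable hash, and double m after every 20 accepted paths, whereas the experiment's exact doubling thresholds are chosen to keep the sample count sub-linear in host size.

```python
from zlib import crc32

def sample_paths(paths, first_m=5, batch=20):
    """Coordinated, sub-linear sampling of one host's URL paths.

    A path is kept when its hash is 0 mod m, so the *same* path is kept
    (or skipped) wherever it occurs -- which is what makes the samples
    from two true mirrors line up.  m starts at 5 and, as a simplifying
    assumption, doubles each time another `batch` paths are accepted,
    so larger hosts contribute more samples but do not dominate.
    """
    m, kept = first_m, []
    for path in sorted(paths):
        if crc32(path.encode()) % m == 0:
            kept.append(path)
            if len(kept) % batch == 0:   # threshold reached: double m
                m *= 2
    return kept
```

Because selection depends only on the path string itself, two hosts that share paths tend to contribute the same samples, without any coordination at crawl time.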
The hostname and the set of paths derived from a given host as described above contribute a set of features associated with the host. Each string is converted to a list of terms by treating every punctuation symbol as a term separator, and consecutive terms are paired to form word bigrams. For example, the hostname www.infoseek.com yields the bigrams (www,infoseek) and (infoseek,com). In fragmenting a path we also associate depth information, which gives us positional word bigrams. For example, the path cellblock/inmates/dilbert yields (cellblock,inmates,0) and (inmates,dilbert,1) as features. Position is useful because we are trying to find mirror sites that share the same path structure. Finding mirror sites that both mirror content and rename paths is a harder problem, which we do not address.
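The feature extraction might be sketched as follows; treating every non-alphanumeric character as a term separator is our assumption about the exact tokenization:

```python
import re

def features(hostname, paths):
    """Word-bigram features for one host.

    The hostname yields plain term bigrams; each path yields positional
    bigrams (term, term, depth), so that only hosts sharing the same
    directory *structure* can match on path features.
    """
    def terms(s):
        # Assumed tokenization: lowercase, split at punctuation.
        return [t for t in re.split(r"[^a-z0-9]+", s.lower()) if t]

    feats = set()
    h = terms(hostname)
    feats.update(zip(h, h[1:]))                   # e.g. (www, infoseek)
    for path in paths:
        p = terms(path)
        for depth, pair in enumerate(zip(p, p[1:])):
            feats.add((pair[0], pair[1], depth))  # positional bigram
    return feats
```

Running this on the examples from the text, www.infoseek.com produces (www,infoseek) and (infoseek,com), and cellblock/inmates/dilbert produces (cellblock,inmates,0) and (inmates,dilbert,1).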
Finally, to reduce the set of features to be considered and reduce noise in the matching process we do three things:
For each valid feature we write the tuple <feature,host> to a file. At the end of the run, the file of tuples is sorted by the first element of the tuple, namely the feature. For any given feature, this causes all hosts that contain the feature to be listed contiguously, which allows the list to be processed in a single pass to compute the similarity between pairs of hosts that share common features. In our feature matching scheme, features are weighted in inverse proportion to the number of hosts they occur in. This corresponds to the usual IDF (inverse document frequency) weighting used in information retrieval: features that occur in many hosts are considered too common to be significant, and correspondingly their IDF weight is low. As an optimization, we skip all features that occur in more than 20 hosts. For the rest, for every feature f, we consider each pair of hosts that share it and increment Similarity_Score(Host1, Host2) by the weight of the feature, FW(f).
FW(f) is defined thus:

FW(f) = S(f) / N(f)

where S(f) is a measure of the significance of f independent of its distribution, and N(f) is the number of hosts in which f occurs. We set S(f) to 4 for host features and to 1 for path features, since a host feature match offers evidence that the two sites are part of the same organization (given that we consider only rare features).

The Normalized Similarity Score of each pair (Host1, Host2) is computed thus:

Normalized_Score(Host1, Host2) = Similarity_Score(Host1, Host2) / ( 1 + 0.1*( Log(N1) + Log(N2) ) )

where N1 and N2 represent the number of URLs in the input from Host1 and Host2 respectively, and are assumed to be proportional to their sizes. The denominator helps normalize the score by host size, to compensate for the fact that large hosts will have more feature matches than small hosts. Our parameters appear to perform satisfactorily but may afford some tuning.
At the end of this run, the normalized similarity score of every pair of hosts that share a significant feature has been computed, and a list of <score, host-pair> tuples is written to a file. To reduce mismatches we filter out host pairs that have only one feature in common, as well as every pair that does not have a path feature in common; this optional step reduces false positives at the risk of skipping some mirrored pairs. The remaining host pairs are sorted in decreasing order of normalized similarity score. This is the output of Stage I: a list of prospective mirrored hosts, most likely first.
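The Stage I scoring can be sketched as follows. This is an in-memory simplification of the sort-based single pass described above, and the encoding of host features as 2-tuples and path features as 3-tuples is our own convention:

```python
from collections import defaultdict
from math import log

def stage1_scores(host_features, urls_per_host, max_hosts=20):
    """Normalized similarity score for every host pair sharing a feature.

    host_features: host -> set of features; we assume a 2-tuple encodes
    a hostname bigram (significance S=4) and a 3-tuple a positional
    path bigram (S=1).  Features on more than max_hosts hosts are
    skipped as too common; each remaining feature f adds
    FW(f) = S(f)/N(f) to every pair of the N hosts sharing it.
    """
    hosts_with = defaultdict(list)                    # inverted index
    for host, feats in sorted(host_features.items()):
        for f in feats:
            hosts_with[f].append(host)

    raw = defaultdict(float)
    for f, hosts in hosts_with.items():
        if not 2 <= len(hosts) <= max_hosts:
            continue                                  # unique or too common
        fw = (4 if len(f) == 2 else 1) / len(hosts)   # FW(f) = S(f)/N(f)
        for i, h1 in enumerate(hosts):
            for h2 in hosts[i + 1:]:
                raw[(h1, h2)] += fw

    # Normalize by host size: Score / (1 + 0.1*(log N1 + log N2)).
    return {(h1, h2): s / (1 + 0.1 * (log(urls_per_host[h1])
                                      + log(urls_per_host[h2])))
            for (h1, h2), s in raw.items()}
```

The single-feature and no-path-feature filters described above would be applied to the returned dictionary before sorting by score.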
To classify a host pair we estimate the fraction of the paths from one host that are valid on the other host, and the extent to which the pages referenced by the common paths are similar. Thus, there are two steps: (i) selecting paths to test, and (ii) checking for validity of paths and computing similarity between pages corresponding to the valid paths.
The first step is done only once per host; since a given host may occur in many host pairs, the same paths are reused. For each host pair, the two root pages (the null path) are always compared. In addition, in our experiment we selected 9 paths at random per host from the paths chosen in Stage I, giving a total of 10 path samples per host.
In the second step the selected paths are used to fetch documents from both hosts, namely the source host where the path is known to be valid, and the target host, where the validity of the path is to be tested. The GET could fail at the source or at the target. If we succeed in fetching both pages we check if the contents are the same. This can be done at various levels of tolerance to change. In the strictest instance, this can be done by fingerprinting both documents (i.e., computing a checksum, such that with high probability any two distinct documents will have different checksums). However, in practice we find that content is often non fingerprint-identical. This can happen even on successive GETs to the same server, due to variable server-side includes or dynamically generated web-pages. With mirrored content there is an even greater likelihood of a discrepancy, due to version inconsistencies and local server-side includes. To compensate for this we de-tag the content, ignore whitespace and check for syntactic similarity, that is, closeness of textual content, instead of fingerprint equality. Following  we use the mathematical concept of "resemblance" to capture the informal notion of syntactic similarity.
The resemblance, r(A, B), of two documents A and B is defined as follows: first each document is transformed into a set of word k-grams, S(A) and S(B), also called "shingles". Then we compute

r(A, B) = |S(A) ∩ S(B)| / |S(A) ∪ S(B)|

where |S| is the size of the set S.
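As a concrete sketch, resemblance over de-tagged text might be computed as follows; the choice k=4 and the simple tag-stripping regular expression are illustrative assumptions, not the exact parameters of the experiment:

```python
import re

def shingles(text, k=4):
    """Word k-grams of de-tagged, whitespace-normalized text."""
    words = re.sub(r"<[^>]*>", " ", text).split()   # strip HTML tags
    return {tuple(words[i:i + k])
            for i in range(max(len(words) - k + 1, 0))}

def resemblance(a, b, k=4):
    """r(A,B) = |S(A) & S(B)| / |S(A) | S(B)| over shingle sets."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 1.0
```

De-tagging before shingling is what lets pages with, say, different markup but the same text still score as fully similar.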
By considering structural resemblance and/or content resemblance (with different similarity thresholds), host pairs are classified into one of several levels. For our experiment we defined the threshold for high similarity as 50%. Additionally, we used a 100% threshold to check for full similarity and a 0% threshold for trace similarity. When all similarity checks fail, the pair is classified as a Mismatch.
Between any pair of hosts there are 19 web page comparisons in all: one comparison of the pair of root pages, plus 9 pages sampled from each host and tested against the other. In each case one of the following outcomes is possible: SF (the fetch failed at the source host), TF (the fetch failed at the target host), FM (fingerprint match), FS (full similarity, at the 100% threshold), HS (high similarity, at the 50% threshold), TS (trace similarity, above 0%), or NS (no similarity).
|Level 1||All tests show SF or FM and not all are SF.||Same Structure. Identical Content.|
|Level 2||All tests show SF or FM or FS, and not all are SF.||Same Structure. Equivalent Content.|
|Level 3||All tests show SF or FM or FS or HS, and not all are SF.||Same Structure. Similar Content.|
|Level 4||Some tests show HS or FM.||Structure is partially replicated. For replicate paths, content is similar.|
|Level 5||No test yields TF and at least one with TS.||Same structure (since all paths are valid). Some of the content appears related.|
|Mismatch||All tests result in TF or NS.||Not similar|
|DNS failure||DNS lookup failed for one of the hosts||No information. Presently inaccessible|
|Server failure||One of the two servers was inaccessible.||No information. Presently inaccessible|
Table 1: Classification of Host Pairs
See Section 2 for a definition of these levels of similarity. Note that our test for Level 5 similarity is stricter than the definition in Section 2, which does not require trace similarity testing. This criterion was added to compensate for the case when all paths seem valid on the target host because it returns its own error page for mismatches instead of a 404 HTTP result. Also note that our tests are only statistical tests for the level of mirroring inferred. It is entirely possible that the actual level of mirroring is different, although the evidence suggests that our tests are fairly reliable.
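The rules of Table 1 can be sketched as a strictest-first cascade; checking the levels in this order is our reading of how the table's rows compose:

```python
def classify(outcomes):
    """Classify one host pair from its 19 per-path test outcomes.

    Outcome codes as in Table 1: SF/TF = fetch failed at source/target,
    FM = fingerprint match, FS/HS/TS = full/high/trace similarity,
    NS = no similarity.  Rules are checked strictest-first.
    """
    s = set(outcomes)
    for level, allowed in (("Level 1", {"SF", "FM"}),
                           ("Level 2", {"SF", "FM", "FS"}),
                           ("Level 3", {"SF", "FM", "FS", "HS"})):
        if s and s <= allowed and s != {"SF"}:
            return level
    if "HS" in s or "FM" in s:
        return "Level 4"           # partial replication
    if "TF" not in s and "TS" in s:
        return "Level 5"           # all paths valid, content only related
    return "Mismatch"
```

The `"TS" in s` guard on Level 5 encodes the soft-error compensation described above: a target that answers every path with its own error page yields no TS outcome and falls through to Mismatch.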
In Stage I, we selected 238,679 hosts to process as described in the
previous section. By sampling on average 20.8 paths per host, we obtained
about 2 million features that matched our selection criteria. Using the
similarity weighting and host pair filtering technique described in Section
3.1 we produced a list of 29,336 host pairs to be validated at the end
of Stage I (see table below).
|Input Set of URLs||179,369,374|
|Stage I Processing|
|Hosts considered (with > 100 links)||238,679|
|Paths sampled (~20.8 per host)||4,970,064|
|Number of candidate host pairs after Stage I||29,336|
|Stage II Processing|
|Total Number of Successful Tests||21,820|
|Level 1 Matches||9,619|
|Level 2 Matches||700|
|Level 3 Matches||406|
|Level 4 Matches||6,011|
|Level 5 Matches||189|
|Total Number of Unsuccessful Tests||7,516|
|DNS Failure (Invalid Server Name)||3,539|
|Mirrored Host Pairs Found (Level 5 match or better)||16,925|
|Distinct Hosts that are Mirrors (Level 5 or better)||22,363 (9.4% of hosts)|
Table 2: Details of the Experiment
In Stage II, not counting failures, we had 21,820 successful tests. Of these 16,925 were found to be mirrors and 4,895 were mismatches. We found 22,363 hosts with a Level 5 mirroring relationship or better. This corresponds to about 9.4% of our total set of large hosts.
When one considers hosts that are fully replicated (Levels 1-3), the most common case is a "com" site mirrored as an "org" site. Almost as frequent is the "com-net" combination. We then see several instances of a "com" site being mirrored under a country's domain.
The "com-edu" combination seems to occur more with partial replication (Level 4). This suggests the sharing of databases rather than a replication of the site's main functionality.
Figure 2 shows a cluster of research sites that we found to have Level 4 similarity because they all had copies of the Protein Data Bank (PDB). The PDB is an archive of experimentally determined three-dimensional structures of biological macromolecules, shared by researchers world-wide. Our experiment revealed almost all the pair-wise mirroring relationships between hosts containing the PDB database which appeared in the Stage I short-list. We discovered that there are some other PDB mirrors that did not make it to the short-list, either because they did not appear in our original list or because fewer than 100 URLs were retrieved from them. This suggests that a more comprehensive list of URLs and a lower inclusion threshold might have identified all of them.
Figure 2: Protein Data Bank Cluster. The edges indicate cross-domain Level 4
similarity between sites that have a mirrored copy of the Protein Data Bank.
With structural cross-domain replication (Level 5) we tend to find web sites by the same organization mirrored in other languages or catering to local content. E.g., (www.yahoo.ca, www.yahoo.co.jp) or (infoart.stanet.ru, infoart.baku.az).
|Rank Based on Frequency of Occurrence||Full Replication (Levels 1-3)||Partial Replication (Level 4)||Structural Replication (Level 5)|
|1||com-org (12.1%)||com-net (14.3%)||com-net (19.23%)|
|2||com-net (10.06%)||com-edu (3.08%)||ca-edu (19.23%)|
|3||com-de (5.17%)||com-org (3.08%)||edu-us (11.53%)|
|4||com-uk (4.89%)||de-edu (2.94%)||ca-com (11.53%)|
|5||com-fr (4.48%)||ca-com (2.32%)||az-ru (11.53%)|
|6||edu-org (3.8%)||com-de (2.19%)||com-sg (7.69%)|
|7||com-jp (3.53%)||au-com (2.12%)||cl-org (3.84%)|
|8||net-org (2.44%)||edu-uk (1.77%)||kz-ru (3.84%)|
|9||at-com (2.44%)||edu-jp (1.5%)||mx-org (3.84%)|
|10||com-kr (2.17%)||com-fr (1.5%)||ca-jp (3.84%)|
Table 3: Distribution of Cross-Domain Mirrors
Level 1 Examples (Cross Domain)
Level 2 Examples
These sites do virtual hosting from the same server, and have content that differs in minor ways:
Level 3 Examples
These are hosted by the same server but appear to be different because of dynamic content (a visitor counter).
www.dancecorner.com (20 samples)

Level 4 Examples
These are mirror sites in French and English. They have the same path hierarchy but with translated content. This is characteristic of Canadian organizations with bilingual content. Some pages are common, so this does not appear to be Level 5.
www.cbsc.org (56 samples)

These mirrors have identical content, but with French accented characters replaced with their HTML entity equivalents in one case. Consequently the pages that were compared appeared to have high or trace similarity rather than full similarity or fingerprint equivalence.
bleue.ac-aix-marseille.fr (20 samples)

These are two different school sites, built with the same tool (Microsoft Communication Tool for Schools). Hence, they have a similar hierarchy and layout, but differ in content.
wchs02.washington.high.washington.k12.ga.us (20 samples)

Although these have different IP addresses and their root pages are different, paths from one source seem to produce identical pages on the other. This led us to the discovery that DesertNet hosts the newspaper Tucson Weekly.
www.desert.net (111 samples)

Level 5 Examples
These two differ only in their encoding of Cyrillic. The content is the same.
www-ibm866.consultant.ru (20 samples)

Two libraries that use the same query engine. Consequently, queries on one are valid on the other.
leagle.wcl.american.edu (127 samples)

Same content, mirrored in Spanish and English.
We found that there are many incentives to replicate data:
1. Load Balancing: Replication decreases server load. E.g., www1.geocities.com and www14.geocities.com
2. High Availability: The Protein Data Bank example presented in Section 4.2 is an instance of geographical mirroring for high availability.
3. Multi-lingual replication: The same data is made available in many languages. For example several Canadian sites have mirrors that only differ in the language used, French or English.
4. Franchises/Local Versions: This happens when content is licensed to other parties. E.g., quicken.excite.com and quicken.com have the same content, except for some site-specific branding.
5. Database Sharing: Two independent sites may share the same database or filesystem leading to mirroring, unintentionally. E.g., www.desert.net and www.tucsonweekly.com.
6. Virtual Hosting: These represent services that use the same IP address (and hence the same server) but implement two different web sites based on the host name. E.g. sports.catalogue.com and www.accsports.com. In this case paths on one are valid on the other and yield identical pages. This need not always be so. In another case, www.autotune.com and www.borg-warner.com, the two virtual hosts share the same IP address but paths on one may not be valid on the other, and common paths lead to dissimilar content.
7. Maintaining Pseudo Identities: Often, the incentive for such replication is to "spam" search engines with seemingly different web sites that in fact have the same content, hoping that one of them gets listed at the top of the ranking order. E.g., two "adult" sites www.discountdvd.com and www.xxxpass.com.
Several recent ranking algorithms that rely on connectivity (e.g., [1, 4, 7]) treat hyperlinks within a host differently from hyperlinks across hosts. Hyperlinks in the latter case are considered indicative of quality, since they suggest a form of endorsement across organizations. Mirroring information has the potential to make these algorithms more robust and accurate, because it can identify organizational boundaries that span many hosts.
Broken links on the WWW are a common problem. The use of publishing tools and inter-server communication protocols has been advocated [5, 6] to preserve referential integrity when links change. Until such a protocol is adopted globally, and legacy servers are retired, we will continue to see "Page Not Found" errors. When a broken link refers to a known replicated host, mirroring information can be used to fetch an equivalent copy of the page.
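For instance, a proxy holding a table of known mirrors could propose substitute URLs for a broken link as in the following sketch; the hostnames and the mirror table are hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical mirror table; in practice this would be mined from
# Stage II results.  Levels 1-3 mirrors (equivalent content) are the
# safest substitutes.
MIRRORS = {"www.example.org": ["mirror1.example.net", "www.example.de"]}

def alternates(broken_url, mirrors=MIRRORS):
    """Candidate replacements for a broken link: same path, each mirror."""
    parts = urlsplit(broken_url)
    return [urlunsplit(parts._replace(netloc=m))
            for m in mirrors.get(parts.netloc, [])]
```

On a "Page Not Found" response, the proxy would try each alternate in turn before surfacing the error to the user.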
To our surprise, we found that our scheme was even able to find "pseudo mirroring" relationships involving misspelt hostnames (e.g., geocities.com misspelt as geocites.com). This suggests that given a large sample of user requests, as in a proxy log or a large crawl of the Web, this strategy might be able to correct frequent typographical errors.
Many researchers (e.g., [9, 8]) have looked at the use of cooperating web proxies to reap the benefits of caching, say within an Intranet where a certain degree of access locality can be expected. Others (e.g., [15, 16]) have studied hierarchical caching. On similar lines, one could imagine a set of cooperating proxies that periodically pool the sets of URLs they encounter to mine mirroring information that they can all use. This could be used to boot-strap a more pro-active scheme such as a mirror registry, where host administrators can register information about mirroring.
In some respects, our validation technique is similar to the one used in the Ahoy! home page finder. Given a person's name and affiliation, Ahoy! generates various possible home page URLs and validates them by trying to fetch them. The templates for these URLs are computed by syntactic analysis of URL strings - specifically, home page URLs encountered previously on the same host.
Statistics from the Online Computer Library Center, based on a sample of about 3000 hosts generated by random sampling of IP addresses, yield an estimate of 1.7 million public web sites as of June 1998, of which about 12% were thought to be mirrored. Their result is not directly comparable to our 9.4% estimate, since their study did not consider arbitrary mirrors, but only pairs of IP addresses that either map to the same hostname or share the first three octets; these pairs were tested to see if they have an HTTP server that returns identical root pages. We, on the other hand, only considered mirroring relationships among the hosts that were well represented in our collection.
Our technique is efficient because it operates syntactically on URL strings and requires fetching only about 20 pages per prospective mirrored host for confirmatory content analysis. We advocate its use for detecting mirrors in web proxies and crawlers in order to reduce redundant fetching and to improve caching behavior and reliability.
|||K. Bharat, M. Henzinger, Improved Algorithms for Topic Distillation in Hyperlinked Environments, in: Proc. ACM SIGIR '98, pp. 104-111, 1998. ftp://ftp.digital.com/pub/DEC/SRC/publications/monika/sigir98.pdf|
|||S. Brin, L. Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, in: Proc. 7th International WWW Conference, Brisbane, Australia, April 1998. http://www7.scu.edu.au/programme/fullpapers/1921/com1921.htm|
|||A. Broder, S. Glassman, M. Manasse, G. Zweig, Syntactic Clustering of the Web, in: Proc. 6th International WWW Conference, 1997. http://decweb.ethz.ch/WWW6/Technical/Paper205/Paper205.html|
|||S. Chakrabarti, B. Dom, P. Raghavan, S. Rajagopalan, D. Gibson, J. Kleinberg, Automatic Resource Compilation by Analyzing Hyperlink Structure and Associated Text, in: Proc. 7th International WWW Conference, Brisbane, Australia, April 1998. http://www7.scu.edu.au/programme/fullpapers/1898/com1898.html|
|||M.L. Creech, Author-Oriented Link Management, in: Proc. 5th International WWW Conference, Paris, France, May 1996. http://www5conf.inria.fr/fich_html/papers/P11/Overview.html|
|||R.K. Jones, J. Pitkow, Supporting the Web: a Distributed Hyperlink Database, in: Proc. 5th International WWW Conference, Paris, France, May 1996. http://www5conf.inria.fr/fich_html/papers/P10/Overview.html|
|||J. Kleinberg, Authoritative Sources in a Hyperlinked Environment, in: Proc. ACM SIAM Symposium on Discrete Algorithms, pp. 668-677, 1998. http://www.cs.cornell.edu/home/kleinber/auth.ps|
|||P. Krishnan, B. Sugla, Utility of Co-operating Web Proxy Caches, in: Proc. 7th International WWW Conference, Brisbane, Australia, April 1998. http://www7.scu.edu.au/programme/fullpapers/1892/com1892.htm|
|||R. Malpani, J. Lorch, D. Berger, Making World Wide Web Caching Servers Cooperate, in: Proc. 4th International WWW Conference, Boston, MA, December 1995.|
|||E.T. O'Neill, P.D. McClain, B.F. Lavoie, A Methodology for Sampling the World Wide Web, in: OCLC Annual Review of Research, Online Computer Library Center Inc., 1997. http://www.oclc.org/oclc/research/publications/review97/oneill/o'neillar980213.htm|
|||J. Selingo, INFORMATION TECHNOLOGY: In Attempting to Archive the Entire Internet, a Scientist Develops a New Way to Search It, in: The Chronicle of Higher Education, Washington D.C., March 6, 1998. http://chronicle.com/data/articles.dir/art-44.dir/issue-26.dir/26a02701.htm|
|||J. Shakes, M. Langheinrich, O. Etzioni, Dynamic Reference Sifting - a Case Study in the Homepage Domain, in: Proc. 6th International WWW Conference, 1997. http://ahoy.cs.washington.edu:6060/doc/paper.html|
|||N. Shivakumar, H. Garcia-Molina, Finding Near-Replicas of Documents on the Web in: Proc. Workshop on Web Databases (WebDB'98). http://www-db.stanford.edu/~shiva/Pubs/web.ps|
|||K. Sparck-Jones, P. Willett, Readings in Information Retrieval, Morgan Kaufmann, 1997.|
|||D. Wessels, Configuring Hierarchical Squid Caches, Online Tutorial, http://squid.nlanr.net/Squid/Hierarchy-Tutorial/, 1997.|
|||P.S. Yu, E.A. MacNair, Performance Study of a Collaborative Method for Hierarchical Caching in Proxy Servers, in: Proc. 7th International WWW Conference, Brisbane, Australia, April 1998. http://www7.scu.edu.au/programme/fullpapers/1829/com1829.htm|
|Krishna Bharat is a member of the research staff at Compaq Computer Corporation's Systems Research Center in Palo Alto, California. His current research interests include Web content discovery and retrieval, user interface issues in Web task automation, and interaction with small devices. He received his Ph.D. in Computer Science from Georgia Institute of Technology in 1996, where he worked on tool and infrastructure support for building distributed user interface applications.|
|Andrei Broder has a B.Sc. from Technion, Israel Institute of Technology and a M.Sc. and Ph.D. in Computer Science from Stanford University. He is a member of the research staff at Compaq Computer Corporation's Systems Research Center in Palo Alto, California. His main interests are the design, analysis, and implementation of probabilistic algorithms and supporting data structures, in particular in the context of Web-scale applications.|