
How to Design Robust Algorithms using Noisy Comparison Oracle

Raghavendra Addanki (UMass Amherst)
raddanki@cs.umass.edu

Sainyam Galhotra (UMass Amherst)
sainyam@cs.umass.edu

Barna Saha (UC Berkeley)
barnas@berkeley.edu

ABSTRACT

Metric based comparison operations such as finding the maximum, nearest and farthest neighbor are fundamental to studying various clustering techniques such as π‘˜-center clustering and agglomerative hierarchical clustering. These techniques crucially rely on accurate estimation of pairwise distance between records. However, computing exact features of the records, and their pairwise distances, is often challenging, and sometimes not possible. We circumvent this challenge by leveraging weak supervision in the form of a comparison oracle that compares the relative distance between the queried points, such as β€˜Is point 𝑒 closer to 𝑣 or 𝑀 closer to π‘₯?’.

However, it is possible that some queries are easier to answer than others using a comparison oracle. We capture this by introducing two different noise models called adversarial and probabilistic noise. In this paper, we study various problems that include finding the maximum and nearest/farthest neighbor search under these noise models. Building upon the techniques we develop for these comparison operations, we give robust algorithms for π‘˜-center clustering and agglomerative hierarchical clustering. We prove that our algorithms achieve good approximation guarantees with a high probability and analyze their query complexity. We evaluate the effectiveness and efficiency of our techniques empirically on various real-world datasets.

PVLDB Reference Format:

Raghavendra Addanki, Sainyam Galhotra, Barna Saha. How to Design Robust Algorithms using Noisy Comparison Oracle. PVLDB, 14(9): XXX-XXX, 2021. doi:XX.XX/XXX.XX

1 INTRODUCTION

Many real world applications such as data summarization, social network analysis, and facility location crucially rely on metric based comparison operations such as finding the maximum, nearest neighbor search, or ranking. As an example, data summarization aims to identify a small representative subset of the data where each representative is a summary of similar records in the dataset. Popular clustering algorithms such as π‘˜-center clustering and hierarchical clustering are often used for data summarization [25, 39]. In this paper, we study fundamental metric based operations such as finding the maximum and nearest neighbor search, and use the developed techniques to study clustering algorithms such as π‘˜-center clustering and agglomerative hierarchical clustering.

Clustering is often regarded as a challenging task, especially due to the absence of domain knowledge.

This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing info@vldb.org. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 14, No. 9 ISSN 2150-8097. doi:XX.XX/XXX.XX

Figure 1: Data summarization example (six images, numbered 1–6).

The final set of clusters identified can be highly inaccurate and noisy [7]. It is often hard to compute the exact features of points, and thus pairwise distance computation from these feature vectors can be highly noisy. This renders the clusters computed based on objectives such as π‘˜-center unreliable.

To address these challenges, there has been recent interest in leveraging supervision from crowd workers (abstracted as an oracle), which provides significant improvement in accuracy but at an added cost incurred by human intervention [20, 55, 57]. For clustering, the existing literature on oracle based techniques mostly uses optimal cluster queries, which ask questions of the form β€˜do the points u and v belong to the same optimal cluster?’ [6, 17, 42, 57]. The goal is to minimize the number of queries, aka the query complexity, while ensuring high accuracy of the clustering output. This model is relevant for applications where the oracle (a human expert or a crowd worker) is aware of the optimal clusters, such as in entity resolution [20, 55]. However, in most applications, the clustering output depends highly on the required number of clusters and the presence of other records. Without a holistic view of the entire dataset, answering optimal cluster queries may not be feasible for any realistic oracle. Let us consider an example data summarization task that highlights some of these challenges.

Example 1.1. Consider a data summarization task over a collection of images (shown in Figure 1). The goal is to identify π‘˜ images (say π‘˜ = 3) that summarize the different locations in the dataset. The images 1, 2 refer to the Eiffel tower in Paris, 3 is the Colosseum in Rome, 4 is the replica of the Eiffel tower at Las Vegas, USA, 5 is Venice and 6 is the Leaning tower of Pisa. The ground truth output in this case would be {{1, 2}, {3, 5, 6}, {4}}. We calculated pairwise similarity between images using the visual features generated from Google Vision API [1]. The pair (1, 4) exhibits the highest similarity of 0.87, while all other pairs have similarity lower than 0.85. Distance between a pair of images 𝑒 and 𝑣, denoted as 𝑑(𝑒, 𝑣), is defined as (1 βˆ’ similarity between 𝑒 and 𝑣). We ran a user experiment by querying crowd workers to answer simple Yes/No questions to help summarize the data (please refer to Section 6.2 for more details).

In this example, we make the following observations.


β€’ Automated clustering techniques generate noisy clusters. Consider the greedy approach for π‘˜-center clustering [27], which sequentially identifies the farthest record as a new cluster center. In this example, records 1 and 4 are placed in the same cluster by greedy π‘˜-center clustering, thereby leading to poor performance. In general, automated techniques are known to generate erroneous similarity values between records due to missing information or even the presence of noise [19, 56, 58]. Even Google's landmark detection API [1] did not identify the location of images 4 and 5.

β€’ Answering pairwise optimal cluster queries is infeasible. Answering whether 1 and 3 belong to the same optimal cluster when presented in isolation is impossible unless the crowd worker is aware of the other records present in the dataset, and of the granularity of the optimum clusters. Using the pairwise Yes/No answers obtained from the crowd workers for the (6 choose 2) = 15 pairs in this example, the identified clusters achieved 0.40 F-score for π‘˜ = 3. Please refer to Section 6.2 for additional details.

β€’ Comparing relative distance between the locations is easy. Answering relative distance queries of the form β€˜Is 1 closer to 3, or is 5 closer to 6?’ does not require any extra knowledge about other records in the dataset. For the 6 images in the example, we asked relative distance queries and the final clusters constructed for π‘˜ = 3 achieved an F-score of 1.

In summary, we observe that humans have an innate understanding of the domain and can easily answer relative distance queries between records. Motivated by the aforementioned observations, we consider a quadruplet comparison oracle that compares the relative distance between two pairs of points (𝑒1, 𝑒2) and (𝑣1, 𝑣2) and outputs the pair with the smaller distance, breaking ties arbitrarily. Such oracle models have been studied extensively in the literature [11, 17, 24, 32, 34, 48, 49]. Even though quadruplet queries are easier than binary optimal cluster queries, some oracle queries may be harder than the rest. In a comparison query, if there is a significant gap between the two distances being compared, then the query is easier to answer [9, 15]. However, when the two distances are close, the chance of an error increases. For example, β€˜Is the location in image 1 closer to 3, or is 2 closer to 6?’ may be difficult to answer.

To capture noise in quadruplet comparison oracle answers, we consider two noise models. In the first noise model, when the pairwise distances are comparable, the oracle can return the pair of points that are farther instead of closer. Moreover, we assume that the oracle has access to all previous queries and can answer queries by acting adversarially. More formally, there is a parameter ΞΌ > 0 such that if max{𝑑(𝑒1, 𝑒2), 𝑑(𝑣1, 𝑣2)} / min{𝑑(𝑒1, 𝑒2), 𝑑(𝑣1, 𝑣2)} ≀ (1 + ΞΌ), then an adversarial error may occur; otherwise the answers are correct. We call this the "Adversarial Noise Model". In the second noise model, called the "Probabilistic Noise Model", given a pair of distances, we assume that the oracle answers correctly with probability 1 βˆ’ 𝑝 for some fixed constant 𝑝 < 1/2. We consider a persistent probabilistic noise model, where oracle answers are persistent, i.e., query responses remain unchanged even upon repeating the same query multiple times. Such noise models have been studied extensively [9, 10, 20, 24, 42, 46] since the error due to oracles often does not change with repetition, and in some cases increases upon repeated querying [20, 42, 46]. This is in contrast to the noise models studied in [17], where the response to every query is independently noisy. Persistent query models are more difficult to handle than independent query models, where repeating each query is sufficient to recover the correct answer by majority voting.

1.1 Our Contributions

We present algorithms for finding the maximum, nearest and farthest neighbors, π‘˜-center clustering and hierarchical clustering objectives under the adversarial and probabilistic noise models using a comparison oracle. We show that our techniques have provable approximation guarantees for both noise models, are efficient, and obtain good query complexity. We empirically evaluate the robustness and efficiency of our techniques on real world datasets.

(i) Maximum, Farthest and Nearest Neighbor: Finding the maximum has received significant attention under both adversarial and probabilistic models [4, 9, 15, 18, 21–23, 38]. In this paper, we provide the following results.

β€’ Maximum under adversarial model. We present an algorithm that returns a value within (1 + ΞΌ)Β³ of the maximum among a set of 𝑛 values 𝑉 with probability 1 βˆ’ 𝛿 (here 𝛿 is the confidence parameter, standard in the literature on randomized algorithms) using 𝑂(𝑛 logΒ²(1/𝛿)) oracle queries and running time (Theorem 3.6).

β€’ Maximum under probabilistic model. We present an algorithm that requires 𝑂(𝑛 logΒ²(𝑛/𝛿)) queries to identify an 𝑂(logΒ²(𝑛/𝛿))th rank value with probability 1 βˆ’ 𝛿 (Theorem 3.7). In other words, in 𝑂(𝑛 logΒ² 𝑛) time we can identify an 𝑂(logΒ² 𝑛)th value in the sorted order with probability 1 βˆ’ 1/𝑛^𝑐 for any constant 𝑐. To contrast our results with the state of the art, Ajtai et al. [4] study a slightly different additive adversarial error model where the answer to a maximum query is correct if the compared values differ by ΞΈ (for some ΞΈ > 0), and otherwise the oracle answers adversarially. Under this setting, they give an additive 3ΞΈ-approximation with 𝑂(𝑛) queries. Although our model cannot be directly compared with theirs, we note that our model is scale invariant, and thus provides a much stronger bound when distances are small. As a consequence, our algorithm can be used under the additive adversarial model as well, obtaining the same approximation guarantees (Theorem 3.10).

For the probabilistic model, after a long series of works [9, 21, 23, 38], only recently was an algorithm proposed with query complexity 𝑂(𝑛 log 𝑛) that returns an 𝑂(log 𝑛)th rank value with probability 1 βˆ’ 1/𝑛 [22]. Previously, the best query complexity was 𝑂(𝑛^{3/2}) [23]. While our bounds are slightly worse than [22], our algorithm is significantly simpler.

The rest of the work on finding the maximum allows repetition of queries and assumes the answers are independent [15, 18]. As discussed earlier, persistent errors are much more difficult to handle than independent errors. In [18], the authors present an algorithm that finds the maximum using 𝑂(𝑛 log(1/𝛿)) queries and succeeds with probability 1 βˆ’ 𝛿. Therefore, even under persistent errors, we obtain guarantees close to the existing ones which assume independent error. The algorithms of [15, 18] do not extend to our model.

β€’ Nearest Neighbor. Nearest neighbor queries can be cast as "finding the minimum" among a set of distances. We can obtain bounds similar to finding the maximum for nearest neighbor queries. In the adversarial model, we obtain a (1 + ΞΌ)Β³-approximation, and in the probabilistic model, we are guaranteed to return an element with rank 𝑂(logΒ²(𝑛/𝛿)) with probability 1 βˆ’ 𝛿, using 𝑂(𝑛 logΒ²(1/𝛿)) and 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries respectively.

Prior techniques have studied nearest neighbor search under noisy distance queries [41], where the oracle returns a noisy estimate of the distance between the queried points and repetitions are allowed. Neither the algorithm of [41], nor other techniques developed for the maximum [4, 18] and top-π‘˜ [15], extend to nearest neighbor search under our noise models.

β€’ Farthest Neighbor. Similarly, the farthest neighbor query can be cast as finding the maximum among a set of distances, and the results for computing the maximum extend to this setting. However, computing the farthest neighbor is one of the basic primitives for more complex tasks like π‘˜-center clustering, and for that, the existing bounds under the probabilistic model that return an 𝑂(log 𝑛)th rank element are insufficient. Since distances in a metric space satisfy the triangle inequality, we exploit it to get a constant approximation to the farthest query under the probabilistic model and a mild distribution assumption (Theorem 3.10).

(ii) π‘˜-center Clustering: π‘˜-center clustering is one of the fundamental models of clustering and is very well studied [52, 59].

β€’ π‘˜-center under adversarial model. We design an algorithm that returns a clustering that is a (2 + ΞΌ) approximation for small values of ΞΌ with probability 1 βˆ’ 𝛿 using 𝑂(π‘›π‘˜Β² + π‘›π‘˜ logΒ²(π‘˜/𝛿)) queries (Theorem 4.2). In contrast, even when exact distances are known, π‘˜-center cannot be approximated better than a 2-factor unless 𝑃 = 𝑁𝑃 [52]. Therefore, we achieve near-optimal results.

β€’ π‘˜-center under probabilistic noise model. For probabilistic noise, when the optimal π‘˜-center clusters are of size at least Ξ©(βˆšπ‘›), our algorithm returns a clustering that achieves a constant approximation with probability 1 βˆ’ 𝛿 using 𝑂(π‘›π‘˜ logΒ²(𝑛/𝛿)) queries (Theorem 4.4). To the best of our knowledge, even though π‘˜-center clustering is an extremely popular and basic clustering paradigm, it hasn't been studied under the comparison oracle model, and we provide the first results in this domain.

(iii) Single Linkage and Complete Linkage – Agglomerative Hierarchical Clustering: Under adversarial noise, we show a clustering technique that loses only a multiplicative factor of (1 + ΞΌ)Β³ in each merge operation and has an overall query complexity of 𝑂(𝑛²). Prior work [24] considers comparison oracle queries to perform average linkage, in which the unobserved pairwise similarities are generated according to a normal distribution. These techniques do not extend to our noise models.

1.2 Other Related Work

For finding the maximum among a given set of values, it is known that techniques based on tournaments obtain optimal guarantees and are widely used [15]. For the problem of finding the nearest neighbor, techniques based on locality sensitive hashing generally work well in practice [5]. Clustering points using the π‘˜-center objective is NP-hard and there are many well known heuristics and approximation algorithms [59], with the classic greedy algorithm achieving an approximation ratio of 2. All these techniques are not applicable when pairwise distances are unknown. As distances between points cannot always be accurately estimated, many recent techniques leverage supervision in the form of an oracle. Most oracle based clustering frameworks consider β€˜optimal cluster’ queries [13, 28, 33, 42, 43] to identify ground truth clusters. Recent techniques for distance based clustering objectives, such as π‘˜-means [6, 12, 36, 37] and π‘˜-median [3], use optimal cluster queries in addition to distance information to obtain better approximation guarantees. As β€˜optimal cluster’ queries can be costly or sometimes infeasible, there has been recent interest in leveraging distance based comparison oracles, similar to our quadruplet oracles, for other problems [17, 24].

Distance based comparison oracles have been used to study a wide range of problems, of which we list a few: learning fairness metrics [34], top-down hierarchical clustering with a different objective [11, 17, 24], correlation clustering [49], classification [32, 48], identifying the maximum [30, 53], top-π‘˜ elements [14–16, 38, 40, 45], information retrieval [35], and skyline computation [54]. To the best of our knowledge, there is no work that considers quadruplet comparison oracle queries to perform π‘˜-center clustering and single/complete linkage based hierarchical clustering.

Closely related to finding the maximum, sorting has also been well studied under various comparison oracle based noise models [8, 9]. The work of [15] considers a different probabilistic noise model with error varying as a function of the difference in the values, but they assume that each query is independent and therefore repetition can help boost the probability of success. Using a quadruplet oracle, [24] studies the problem of recovering a hierarchical clustering under a planted noise model, which is not applicable to single linkage.

2 PRELIMINARIES

Let 𝑉 = {𝑣1, 𝑣2, . . . , 𝑣𝑛} be a collection of 𝑛 records such that each record may be associated with a value π‘£π‘Žπ‘™(𝑣𝑖), βˆ€π‘– ∈ [1, 𝑛]. We assume that there exists a total ordering over the values of elements in 𝑉. For simplicity we denote the value of record 𝑣𝑖 as 𝑣𝑖 instead of π‘£π‘Žπ‘™(𝑣𝑖) whenever it is clear from the context.

Given this setting, we consider a comparison oracle that compares the values of any pair of records (𝑣𝑖, 𝑣𝑗) and outputs Yes if 𝑣𝑖 ≀ 𝑣𝑗 and No otherwise.

Definition 2.1 (Comparison Oracle). An oracle is a function O : 𝑉 Γ— 𝑉 β†’ {Yes, No}. Each oracle query considers two values as input and outputs O(𝑣1, 𝑣2) = Yes if 𝑣1 ≀ 𝑣2 and No otherwise.

Note that a comparison oracle is defined for any pair of values. Given this oracle setting, we define the problem of identifying the maximum over the records 𝑉.

Problem 2.2 (Maximum). Given a collection of 𝑛 records 𝑉 = {𝑣1, . . . , 𝑣𝑛} and access to a comparison oracle O, identify argmax_{𝑣𝑖 ∈ 𝑉} 𝑣𝑖 with the minimum number of queries to the oracle.

As a natural extension, we can also study the problem of identifying the record corresponding to the smallest value in 𝑉.

2.1 Quadruplet Oracle Comparison Query

In applications that consider distance based comparison of records, like nearest neighbor identification, the records 𝑉 = {𝑣1, . . . , 𝑣𝑛} are generally considered to be present in a high-dimensional metric space along with a distance 𝑑 : 𝑉 Γ— 𝑉 β†’ R+ defined over pairs of records. We assume that the embedding of records in the latent space is not known, but there exists an underlying ground truth [5]. Prior techniques mostly assume complete knowledge of an accurate distance metric and are not applicable in our setting. In order to capture the setting where we can compare distances between pairs of records, we define the quadruplet oracle below.

Definition 2.3 (Quadruplet Oracle). An oracle is a function O : 𝑉 Γ— 𝑉 Γ— 𝑉 Γ— 𝑉 β†’ {Yes, No}. Each oracle query considers two pairs of records as input and outputs O(𝑣1, 𝑣2, 𝑣3, 𝑣4) = Yes if 𝑑(𝑣1, 𝑣2) ≀ 𝑑(𝑣3, 𝑣4) and No otherwise.

The quadruplet oracle is similar to the comparison oracle discussed before, with the difference that the two values being compared are associated with pairs of records as opposed to individual records. Given this oracle setting, we define the problem of identifying the farthest record in 𝑉 with respect to a query point π‘ž as follows.

Problem 2.4 (Farthest point). Given a collection of 𝑛 records 𝑉 = {𝑣1, . . . , 𝑣𝑛}, a query record π‘ž and access to a quadruplet oracle O, identify argmax_{𝑣𝑖 ∈ 𝑉 \ {π‘ž}} 𝑑(π‘ž, 𝑣𝑖).

Similarly, the nearest neighbor query returns a point that satisfies argmin_{𝑒𝑖 ∈ 𝑉 \ {π‘ž}} 𝑑(π‘ž, 𝑒𝑖). Now, we formally define the π‘˜-center clustering problem.

Problem 2.5 (π‘˜-center clustering). Given a collection of 𝑛 records 𝑉 = {𝑣1, . . . , 𝑣𝑛} and access to a comparison oracle O, identify π‘˜ centers (say 𝑆 βŠ† 𝑉) and a mapping of records to corresponding centers, πœ‹ : 𝑉 β†’ 𝑆, such that the maximum distance of any record from its center, i.e., max_{𝑣𝑖 ∈ 𝑉} 𝑑(𝑣𝑖, πœ‹(𝑣𝑖)), is minimized.

We assume that the points 𝑣𝑖 ∈ 𝑉 exist in a metric space and the distance between any pair of points is not known. We denote the unknown distance between any pair of points (𝑣𝑖, 𝑣𝑗), where 𝑣𝑖, 𝑣𝑗 ∈ 𝑉, as 𝑑(𝑣𝑖, 𝑣𝑗), and use π‘˜ to denote the number of clusters. Optimal clusters are denoted as πΆβˆ—, with πΆβˆ—(𝑣𝑖) βŠ† 𝑉 denoting the set of points belonging to the optimal cluster containing 𝑣𝑖. Similarly, 𝐢(𝑣𝑖) βŠ† 𝑉 refers to the nodes belonging to the cluster containing 𝑣𝑖 for any clustering given by 𝐢(Β·).

In addition to π‘˜-center clustering, we study single linkage and complete linkage agglomerative clustering techniques, where the distance metric over the records is not known a priori. These techniques initialize each record 𝑣𝑖 as a separate singleton cluster and sequentially merge the pair of clusters having the least distance between them. In the case of single linkage, the distance between two clusters 𝐢1 and 𝐢2 is characterized by the closest pair of records, defined as

𝑑𝑆𝐿(𝐢1, 𝐢2) = min_{𝑣𝑖 ∈ 𝐢1, 𝑣𝑗 ∈ 𝐢2} 𝑑(𝑣𝑖, 𝑣𝑗).

In complete linkage, the distance between a pair of clusters 𝐢1 and 𝐢2 is calculated by identifying the farthest pair of records,

𝑑𝐢𝐿(𝐢1, 𝐢2) = max_{𝑣𝑖 ∈ 𝐢1, 𝑣𝑗 ∈ 𝐢2} 𝑑(𝑣𝑖, 𝑣𝑗).

2.2 Noise Models

The oracle models discussed in Problems 2.2, 2.4 and 2.5 assume that the oracle answers every comparison query correctly. In real world applications, however, the answers can be wrong, which can lead to noisy results. To formalize the notion of noise, we consider two different models. First, the adversarial noise model considers a setting where a comparison query can be adversarially wrong if the two values being compared are within a multiplicative factor of (1 + ΞΌ) for some constant ΞΌ > 0:

O(𝑣1, 𝑣2) =
  Yes, if 𝑣1 < 𝑣2 / (1 + ΞΌ)
  No, if 𝑣1 > (1 + ΞΌ) 𝑣2
  adversarially incorrect, if 1/(1 + ΞΌ) ≀ 𝑣1/𝑣2 ≀ (1 + ΞΌ)

The parameter ΞΌ corresponds to the degree of error. For example, ΞΌ = 0 implies a perfect oracle. The model extends to the quadruplet oracle as follows.

O(𝑣1, 𝑣2, 𝑣3, 𝑣4) =
  Yes, if 𝑑(𝑣1, 𝑣2) < 𝑑(𝑣3, 𝑣4) / (1 + ΞΌ)
  No, if 𝑑(𝑣1, 𝑣2) > (1 + ΞΌ) 𝑑(𝑣3, 𝑣4)
  adversarially incorrect, if 1/(1 + ΞΌ) ≀ 𝑑(𝑣1, 𝑣2)/𝑑(𝑣3, 𝑣4) ≀ (1 + ΞΌ)

The second model is a probabilistic noise model where each comparison query is incorrect independently with probability 𝑝 < 1/2, and asking the same query multiple times yields the same response. We discuss ways to estimate ΞΌ and 𝑝 from real data in Section 6.
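To make the two noise models concrete, the following Python sketch simulates a quadruplet oracle under each model. The point representation, the distance function 𝑑, and the parameters mu and p are illustrative assumptions rather than anything fixed by the paper; persistence in the probabilistic model is obtained by caching the answer of every distinct query.

```python
import random

def make_adversarial_oracle(d, mu):
    """Simulated quadruplet oracle under the adversarial noise model: whenever the
    two distances are within a (1+mu) factor of each other, the answer may be wrong;
    this toy adversary simply flips the true answer in that regime."""
    def oracle(v1, v2, v3, v4):
        a, b = d(v1, v2), d(v3, v4)
        if a < b / (1 + mu):
            return "Yes"
        if a > (1 + mu) * b:
            return "No"
        return "No" if a <= b else "Yes"   # comparable distances: adversarial flip
    return oracle

def make_probabilistic_oracle(d, p, seed=0):
    """Simulated quadruplet oracle under the persistent probabilistic noise model:
    each distinct query is wrong independently with probability p, and repeating
    the same query always returns the same cached answer."""
    rng = random.Random(seed)
    cache = {}
    def oracle(v1, v2, v3, v4):
        key = (v1, v2, v3, v4)
        if key not in cache:
            answer = "Yes" if d(v1, v2) <= d(v3, v4) else "No"
            if rng.random() < p:                      # flip with probability p
                answer = "No" if answer == "Yes" else "Yes"
            cache[key] = answer
        return cache[key]
    return oracle
```

For instance, with points on a line and d(u, v) = |u βˆ’ v|, make_adversarial_oracle(d, 1.0) may only answer incorrectly when the two compared distances are within a factor of 2 of each other.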

3 FINDING MAXIMUM

In this section, we present robust algorithms to identify the record corresponding to the maximum value in 𝑉 under the adversarial noise model and the probabilistic noise model. Later we extend the algorithms to find the farthest and the nearest neighbor. We note that our algorithms for the adversarial model are parameter free (they do not depend on ΞΌ), and the algorithms for the probabilistic model can use 𝑝 = 0.5 as a worst case estimate of the noise.

3.1 Adversarial Noise

Consider a trivial approach that maintains a running maximum value while sequentially processing the records, i.e., if a larger value is encountered, the current maximum value is updated to the larger value. This approach requires 𝑛 βˆ’ 1 comparisons. However, in the presence of adversarial noise, the output can have a significantly lower value than the correct maximum. In general, if 𝑣max is the true maximum of 𝑉, then the above approach can return an approximate maximum whose value could be as low as 𝑣max/(1 + ΞΌ)^{π‘›βˆ’1}. To see this, assume 𝑣1 = 1 and 𝑣𝑖 = (1 + ΞΌ βˆ’ πœ–)^𝑖, where πœ– > 0 is very close to 0. It is possible that while comparing 𝑣𝑖 and 𝑣𝑖+1, the oracle returns 𝑣𝑖 as the larger element. If this mistake is repeated for every 𝑖, then 𝑣1 will be declared the maximum element, whereas the correct answer is 𝑣𝑛 β‰ˆ 𝑣1 (1 + ΞΌ)^{π‘›βˆ’1}.

To improve upon this naive strategy, we introduce a natural score-keeping idea: given a set 𝑆 βŠ† 𝑉 of records, we maintain Count(𝑣, 𝑆), equal to the number of values smaller than 𝑣 in 𝑆:

Count(𝑣, 𝑆) = Ξ£_{π‘₯ ∈ 𝑆 \ {𝑣}} 1{O(𝑣, π‘₯) == No}

It is easy to observe that when the oracle makes no mistakes, Count(𝑠max, 𝑆) = |𝑆| βˆ’ 1 obtains the highest score, where 𝑠max is the maximum value in 𝑆. Using this observation, in Algorithm 1, we output the value with the highest Count score.


Given a set of records 𝑉, we show in Lemma 3.1 that Count-Max(𝑉), computed using Algorithm 1, always returns a good approximation of the maximum value in 𝑉.

Lemma 3.1. Given a set of values 𝑉 with maximum value 𝑣max, Count-Max(𝑉) returns a value 𝑒max where 𝑒max β‰₯ 𝑣max/(1 + ΞΌ)Β², using 𝑂(|𝑉|Β²) oracle queries.

Using Example 3.2, we demonstrate that when ΞΌ = 1, the (1 + ΞΌ)Β² = 4 approximation ratio is achieved by Algorithm 1.

Example 3.2. Let 𝑆 denote a set of four records 𝑒, 𝑣, 𝑀 and 𝑑 with ground truth values 51, 101, 102 and 202, respectively. While identifying the maximum value under adversarial noise with ΞΌ = 1, the oracle must return a correct answer to O(𝑒, 𝑑), and all other oracle query answers can be adversarially incorrect. If the oracle answers all other queries incorrectly, the Count values of 𝑑, 𝑀, 𝑒, 𝑣 are 1, 1, 2, and 2 respectively. Therefore, 𝑒 and 𝑣 are equally likely, and when Algorithm 1 returns 𝑒, we have a 202/51 β‰ˆ 3.96 approximation.

From Lemma 3.1, we have that 𝑂(𝑛²) oracle queries, where |𝑉| = 𝑛, are required to get a (1 + ΞΌ)Β² approximation. In order to improve the query complexity, we use a tournament to obtain the maximum value. The idea of using a tournament for finding the maximum has been studied in the past [15, 18].

Algorithm 2 presents the pseudocode of the approach, which takes values 𝑉 as input and outputs an approximate maximum value. It constructs a balanced Ξ»-ary tree T containing 𝑛 leaf nodes such that a random permutation of the values 𝑉 is assigned to the leaves of T. In a tournament, the internal nodes of T are processed bottom-up such that at every internal node 𝑀, we assign the value that is largest among the children of 𝑀. To identify the largest value, we calculate argmax_{𝑣 ∈ children(𝑀)} Count(𝑣, children(𝑀)) at the internal node 𝑀, where Count(𝑣, 𝑋) refers to the number of elements in 𝑋 that are considered smaller than 𝑣. Finally, we return the value at the root of T as our output. In Lemma 3.3, we show that Algorithm 2 returns a value that is a (1 + ΞΌ)^{2 log_Ξ» 𝑛} multiplicative approximation of the maximum value.

Algorithm 1 Count-Max(𝑆): finds the maximum value by counting
1: Input: A set of values 𝑆
2: Output: An approximate maximum value of 𝑆
3: for 𝑣 ∈ 𝑆 do
4:   Calculate Count(𝑣, 𝑆)
5: 𝑒max ← argmax_{𝑣 ∈ 𝑆} Count(𝑣, 𝑆)
6: return 𝑒max
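A minimal Python rendering of Count-Max (Algorithm 1), assuming a comparison oracle `oracle(a, b)` that returns "Yes" when the first value is at most the second, as in Definition 2.1; the function name and signature are ours.

```python
def count_max(S, oracle):
    """Count-Max (Algorithm 1): return the element of S with the largest Count
    score, i.e. the number of elements the oracle declares smaller than it.
    Uses O(|S|^2) oracle queries; ties are broken arbitrarily."""
    def count(v):
        return sum(1 for x in S if x != v and oracle(v, x) == "No")
    return max(S, key=count)
```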

Lemma 3.3. Suppose 𝑣max is the maximum value among the set of records 𝑉. Algorithm 2 outputs a value 𝑒max such that 𝑒max β‰₯ 𝑣max / (1 + ΞΌ)^{2 log_Ξ» 𝑛}, using 𝑂(𝑛λ) oracle queries.

According to Lemma 3.3, Algorithm 2 identifies a constant approximation when Ξ» = Θ(𝑛) and ΞΌ is a fixed constant, and has a query complexity of Θ(𝑛²). By reducing the degree of the tournament tree from Ξ» to 2, we can achieve Θ(𝑛) query complexity, but with a worse approximation ratio of (1 + ΞΌ)^{log 𝑛}.

Now, we describe our main algorithm (Algorithm 4), which uses the following observation to improve the overall query complexity.

Observation 3.4. At an internal node 𝑀 ∈ T, the identified maximum is incorrect only if there exists π‘₯ ∈ children(𝑀) that is very close to the true maximum (say 𝑀max), i.e., 𝑀max/(1 + ΞΌ) ≀ π‘₯ ≀ (1 + ΞΌ) 𝑀max.

Based on the above observation, our algorithm Max-Adv uses two steps to identify a good approximation of 𝑣max. Consider the case when there are many values that are close to 𝑣max. In Algorithm Max-Adv, we use a subset 𝑉̂ βŠ† 𝑉 of size βˆšπ‘› Β· 𝑑 (for a suitable choice of parameter 𝑑) obtained using uniform sampling with replacement. We show that using a sufficiently large subset 𝑉̂ obtained by sampling, we ensure that at least one value that is close to 𝑣max is in 𝑉̂, thereby giving a good approximation of 𝑣max.

Algorithm 2 Tournament: finds the maximum value using a tournament tree
1: Input: Set of values 𝑉, degree Ξ»
2: Output: An approximate maximum value 𝑒max
3: Construct a balanced Ξ»-ary tree T with |𝑉| nodes as leaves.
4: Let πœ‹π‘‰ be a random permutation of 𝑉 assigned to the leaves of T
5: for 𝑖 = 1 to log_Ξ» |𝑉| do
6:   for each internal node 𝑀 at level log_Ξ» |𝑉| βˆ’ 𝑖 do
7:     Let π‘ˆ denote the children of 𝑀
8:     Set the internal node 𝑀 to Count-Max(π‘ˆ)
9: 𝑒max ← value at root of T
10: return 𝑒max
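The Ξ»-ary tournament of Algorithm 2 can be sketched by repeatedly grouping the randomly permuted values into blocks of size Ξ» and promoting each block's Count-Max winner; this level-by-level view is equivalent to processing the tree bottom-up. The helper `count_max` is the sketch given after Algorithm 1.

```python
import random

def tournament(V, lam, oracle):
    """Tournament (Algorithm 2): balanced lambda-ary elimination. Each internal
    node is set to the Count-Max winner of its children; roughly O(n*lambda)
    oracle queries overall. Relies on count_max defined earlier."""
    level = list(V)
    random.shuffle(level)                  # random assignment of values to leaves
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), lam):
            block = level[i:i + lam]       # children of one internal node
            next_level.append(count_max(block, oracle))
        level = next_level
    return level[0]
```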

In order to handle the case when there are only a few values close to 𝑣max, we divide the entire data set into 𝑙 disjoint parts (for a suitable choice of 𝑙) and run the Tournament algorithm with degree Ξ» = 2 on each of these parts separately (Algorithm 3). As there are very few points close to 𝑣max, the probability of comparing any such value with 𝑣max is small, and this ensures that in the partition containing 𝑣max, Tournament returns 𝑣max. We collect the maximum values returned by Algorithm 2 from all the partitions and include these values in 𝑇 in Algorithm Max-Adv. We repeat this procedure 𝑑 times and set 𝑙 = βˆšπ‘›, 𝑑 = 2 log(2/𝛿) to achieve the desired success probability 1 βˆ’ 𝛿. We combine the outputs of both steps, i.e., 𝑉̂ and 𝑇, and output the maximum among them using Count-Max. This ensures that we get a good approximation, as we use the best of both approaches.

Algorithm 3 Tournament-Partition
1: Input: Set of values 𝑉, number of partitions 𝑙
2: Output: A set of maximum values from each partition
3: Randomly partition 𝑉 into 𝑙 equal parts 𝑉1, 𝑉2, . . . , 𝑉𝑙
4: for 𝑖 = 1 to 𝑙 do
5:   𝑝𝑖 ← Tournament(𝑉𝑖, 2)
6:   𝑇 ← 𝑇 βˆͺ {𝑝𝑖}
7: return 𝑇

Theoretical Guarantees. In order to prove the approximation guarantee of Algorithm 4, we first argue that the sample 𝑉̂ contains a good approximation of the maximum value 𝑣max with high probability. Let 𝐢 denote the set of values that are very close to 𝑣max; formally, 𝐢 = {𝑒 : 𝑣max/(1 + ΞΌ) ≀ 𝑒 ≀ 𝑣max}.


Algorithm 4 Max-Adv: Maximum with Adversarial Noise
1: Input: Set of values 𝑉, number of iterations 𝑑, partitions 𝑙
2: Output: An approximate maximum value 𝑒max
3: 𝑖 ← 1, 𝑇 ← βˆ…
4: Let 𝑉̂ denote a sample of size βˆšπ‘› Β· 𝑑 selected uniformly at random (with replacement) from 𝑉
5: for 𝑖 ≀ 𝑑 do
6:   𝑇𝑖 ← Tournament-Partition(𝑉, 𝑙)
7:   𝑇 ← 𝑇 βˆͺ 𝑇𝑖
8: 𝑒max ← Count-Max(𝑉̂ βˆͺ 𝑇)
9: return 𝑒max
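Putting the pieces together, here is a sketch of Max-Adv (Algorithm 4) under the parameter choices 𝑙 = βˆšπ‘› and 𝑑 = 2 log(2/𝛿) stated above. It reuses the `count_max` and `tournament` sketches from earlier, and the partitioning helper mirrors Algorithm 3; the exact handling of uneven partition sizes is our simplification.

```python
import math, random

def tournament_partition(V, l, oracle):
    """Tournament-Partition (Algorithm 3): randomly split V into l parts and
    return the binary-tournament winner of each part."""
    shuffled = list(V)
    random.shuffle(shuffled)
    size = max(1, len(shuffled) // l)
    parts = [shuffled[i:i + size] for i in range(0, len(shuffled), size)]
    return [tournament(part, 2, oracle) for part in parts]

def max_adv(V, delta, oracle):
    """Max-Adv (Algorithm 4): combine a uniform sample V_hat (with replacement)
    with the winners of t repeated partition tournaments, then return the
    Count-Max of the union."""
    n = len(V)
    l = max(1, math.isqrt(n))
    t = max(1, round(2 * math.log(2 / delta)))
    V_hat = [random.choice(V) for _ in range(l * t)]
    T = []
    for _ in range(t):
        T.extend(tournament_partition(V, l, oracle))
    return count_max(V_hat + T, oracle)
```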

In Lemma 3.5, we first show that 𝑉̂ contains a value 𝑣𝑗 satisfying 𝑣𝑗 β‰₯ 𝑣max/(1 + ΞΌ) whenever the size of 𝐢 is large, i.e., |𝐢| > βˆšπ‘›/2. Otherwise, i.e., whenever |𝐢| ≀ βˆšπ‘›/2, we show that we can recover 𝑣max correctly with probability 1 βˆ’ 𝛿/2.

Lemma 3.5. (1) If |𝐢| > βˆšπ‘›/2, then there exists a value 𝑣𝑗 ∈ 𝑉̂ satisfying 𝑣𝑗 β‰₯ 𝑣max/(1 + ΞΌ) with probability 1 βˆ’ 𝛿/2. (2) Suppose |𝐢| ≀ βˆšπ‘›/2. Then 𝑇 contains 𝑣max with probability at least 1 βˆ’ 𝛿/2.

Now, we briefly provide a sketch of the proof of Lemma 3.5. Consider the first step, where we use a uniformly random sample 𝑉̂ of βˆšπ‘› Β· 𝑑 points from 𝑉 (obtained with replacement). When |𝐢| β‰₯ βˆšπ‘›/2, the probability that 𝑉̂ contains a value from 𝐢 is given by

1 βˆ’ (1 βˆ’ |𝐢|/𝑛)^{|𝑉̂|} = 1 βˆ’ (1 βˆ’ 1/(2βˆšπ‘›))^{2βˆšπ‘› log(2/𝛿)} β‰ˆ 1 βˆ’ 𝛿/2.

In the second step, Algorithm 4 uses a modified tournament tree that partitions the set 𝑉 into 𝑙 = βˆšπ‘› parts of size 𝑛/𝑙 = βˆšπ‘› each and identifies a maximum 𝑝𝑖 from each partition 𝑉𝑖 using Algorithm 2. The expected number of elements from 𝐢 in the partition 𝑉𝑖 containing 𝑣max is |𝐢|/𝑙 = βˆšπ‘›/(2βˆšπ‘›) = 1/2. Thus, by Markov's inequality, the probability that 𝑉𝑖 contains a value from 𝐢 is at most 1/2. So, with probability at least 1/2, 𝑣max is never compared with any point from 𝐢 in the partition 𝑉𝑖. To increase the success probability, we run this procedure 𝑑 times and collect all the outputs. Among the 𝑑 runs of Algorithm 2, we argue that 𝑣max is never compared with any value of 𝐢 in at least one of the iterations with probability at least 1 βˆ’ (1 βˆ’ 1/2)^{2 log(2/𝛿)} β‰₯ 1 βˆ’ 𝛿/2.

In Lemma 3.1, we showed that using Count-Max we get a (1 + ΞΌ)Β² multiplicative approximation. Combining it with Lemma 3.5, we have that the value 𝑒max returned by Algorithm 4 satisfies 𝑒max β‰₯ 𝑣max/(1 + ΞΌ)Β³ with probability 1 βˆ’ 𝛿. For the query complexity, the sample 𝑉̂ and the set 𝑇 of values returned by Algorithm 3 together contain 𝑂(βˆšπ‘› Β· 𝑑) values. These values are then processed by Count-Max to identify the maximum 𝑒max, which requires 𝑂(|𝑉̂ βˆͺ 𝑇|Β²) = 𝑂(𝑛 logΒ²(1/𝛿)) oracle queries.

Theorem 3.6. Given a set of values 𝑉, Algorithm 4 returns a (1 + ΞΌ)Β³ approximation of the maximum value with probability 1 βˆ’ 𝛿 using 𝑂(𝑛 logΒ²(1/𝛿)) oracle queries.

3.2 Probabilistic Noise

We cannot directly extend the algorithms for the adversarial noise model to probabilistic noise. Specifically, the theoretical guarantees of Lemma 3.3 do not apply when the noise is probabilistic. In this section, we develop several new ideas to handle probabilistic noise.

Figure 2: Example for Lemma 3.1 with ΞΌ = 1 (points 𝑠, 𝑒, 𝑣, 𝑀, 𝑑).

Let rank(𝑒, 𝑉) denote the index of 𝑒 in the non-increasing sorted order of values in 𝑉; so 𝑣max has rank 1, and so on. Our main idea is an early stopping approach: we use a sample 𝑆 βŠ† 𝑉 of 𝑂(log(𝑛/𝛿)) values selected randomly, and for every value 𝑒 that is not in 𝑆, we calculate Count(𝑒, 𝑆) and discard 𝑒 based on a chosen threshold for the Count scores. We argue that doing so eliminates the values that are far away from the maximum in the sorted ranking. This process is continued Θ(log 𝑛) times to identify the maximum value. We present the pseudocode in the Appendix and prove the following approximation guarantee.

Theorem 3.7. There is an algorithm that returns 𝑒max ∈ 𝑉 such that rank(𝑒max, 𝑉) = 𝑂(logΒ²(𝑛/𝛿)) with probability 1 βˆ’ 𝛿 and requires 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries.

The algorithm to identify the minimum value is the same as that for the maximum, with the modification that the Count scores consider the case of Yes (instead of No): Count(𝑣, 𝑆) = Ξ£_{π‘₯ ∈ 𝑆 \ {𝑣}} 1{O(𝑣, π‘₯) == Yes}.

3.3 Extension to Farthest and Nearest Neighbor

Given a set of records 𝑉, the farthest record from a query 𝑒 corresponds to the record 𝑒′ ∈ 𝑉 such that 𝑑(𝑒, 𝑒′) is maximum. This query is equivalent to finding the maximum in the set of distance values 𝐷(𝑒) = {𝑑(𝑒, 𝑒′) | 𝑒′ ∈ 𝑉}, containing 𝑛 values, for which we already developed algorithms earlier in Section 3. Since the ground truth distance between any pair of records is not known, we require the quadruplet oracle (instead of the comparison oracle) to identify the maximum element in 𝐷(𝑒). Similarly, the nearest neighbor of a query record 𝑒 corresponds to finding the record with the minimum distance value in 𝐷(𝑒). The algorithms for finding the maximum from the previous sections extend to these settings with similar guarantees.

Example 3.8. Figure 2 shows a worst-case example for the approximation guarantee when identifying the farthest point from 𝑠 (with ΞΌ = 1). Similar to Example 3.2, the Count values of 𝑑, 𝑀, 𝑒, 𝑣 are 1, 1, 2, 2 respectively. Therefore, 𝑒 and 𝑣 are equally likely, and when Algorithm 1 outputs 𝑒, we have a β‰ˆ 3.96 approximation.

For probabilistic noise, the farthest point identified in Section 3.2 is guaranteed to rank within the top 𝑂(logΒ² 𝑛) values of the set 𝑉 (Theorem 3.7). In this section, we show that it is possible to compute the farthest point within a small additive error under the probabilistic model if the data set satisfies an additional property discussed below. For simplicity of exposition, we assume 𝑝 ≀ 0.40, though our algorithms work for any value of 𝑝 < 0.5 (with different constants).

One of the challenges in developing robust algorithms for farthest point identification is that every relative distance comparison of records from 𝑒 (i.e., O(𝑒, 𝑣𝑖, 𝑒, 𝑣𝑗) for some 𝑣𝑖, 𝑣𝑗 ∈ 𝑉) may be answered incorrectly with constant error probability 𝑝, and the success probability cannot be boosted by repetition. We overcome this challenge by performing pairwise comparisons in a robust manner.


Figure 3: Algorithm 5 returns β€˜Yes’ as 𝑑(𝑒, 𝑣𝑖) < 𝑑(𝑒, 𝑣𝑗) βˆ’ 2𝛼. In the depicted example, O(𝑒, 𝑣𝑖, 𝑒, 𝑣𝑗) is answered correctly with probability 1 βˆ’ 𝑝; to boost the correctness probability, FCount uses the queries O(π‘₯, 𝑣𝑖, π‘₯, 𝑣𝑗) for all π‘₯ in the region of radius 𝛼 around 𝑒, denoted by 𝑆.

Suppose the desired failure probability is 𝛿. We observe that if the Θ(log(1/𝛿)) records closest to the query 𝑒 are known (say 𝑆) and max_{π‘₯ ∈ 𝑆} 𝑑(𝑒, π‘₯) ≀ 𝛼 for some 𝛼 > 0, then each pairwise comparison of the form O(𝑒, 𝑣𝑖, 𝑒, 𝑣𝑗) can be replaced by Algorithm PairwiseComp, which we then use to execute Algorithm 4. Algorithm 5 takes the two records 𝑣𝑖 and 𝑣𝑗 as input, along with 𝑆, and outputs Yes or No, where Yes denotes that 𝑣𝑖 is closer to 𝑒. We calculate FCount(𝑣𝑖, 𝑣𝑗) = Ξ£_{π‘₯ ∈ 𝑆} 1{O(𝑣𝑖, π‘₯, 𝑣𝑗, π‘₯) == Yes} as a robust estimate of the number of times the oracle considers 𝑣𝑖 to be closer to π‘₯ than 𝑣𝑗 is. If FCount(𝑣𝑖, 𝑣𝑗) is smaller than 0.3|𝑆| ≀ (1 βˆ’ 𝑝)|𝑆|/2, then we output No, and Yes otherwise. Therefore, every pairwise comparison query is replaced with Θ(log(1/𝛿)) quadruplet queries using Algorithm 5.

We argue that Algorithm 5 outputs the correct answer with high probability if |𝑑(𝑒, 𝑣𝑗) βˆ’ 𝑑(𝑒, 𝑣𝑖)| β‰₯ 2𝛼 (see Figure 3). In Lemma 3.9, we show that if 𝑑(𝑒, 𝑣𝑗) > 𝑑(𝑒, 𝑣𝑖) + 2𝛼, then FCount(𝑣𝑖, 𝑣𝑗) β‰₯ 0.3|𝑆| with probability 1 βˆ’ 𝛿.

Lemma 3.9. Suppose max_{𝑣𝑖 ∈ 𝑆} 𝑑(𝑒, 𝑣𝑖) ≀ 𝛼 and |𝑆| β‰₯ 6 log(1/𝛿). Consider two records 𝑣𝑖 and 𝑣𝑗 such that 𝑑(𝑒, 𝑣𝑖) < 𝑑(𝑒, 𝑣𝑗) βˆ’ 2𝛼; then FCount(𝑣𝑖, 𝑣𝑗) β‰₯ 0.3|𝑆| with probability 1 βˆ’ 𝛿.

With the help of Algorithm 5, a relative distance query of any pair of records 𝑣𝑖, 𝑣𝑗 from 𝑒 can be answered correctly with high probability provided |𝑑(𝑒, 𝑣𝑖) βˆ’ 𝑑(𝑒, 𝑣𝑗)| β‰₯ 2𝛼. Therefore, the output of Algorithm 5 is equivalent to an additive adversarial error model where any quadruplet query can be adversarially incorrect if |𝑑(𝑒, 𝑣𝑖) βˆ’ 𝑑(𝑒, 𝑣𝑗)| < 2𝛼 and is correct otherwise. In the Appendix, we show that Algorithm 4 can be extended to the additive adversarial error model, such that each comparison (𝑒, 𝑣𝑖, 𝑒, 𝑣𝑗) is replaced by PairwiseComp (Algorithm 5). We give an approximation guarantee that loses an additive 6𝛼, following an analysis similar to that of Theorem 3.6.

Algorithm 5 PairwiseComp(𝑒, 𝑣𝑖, 𝑣𝑗, 𝑆)
1: Calculate FCount(𝑣𝑖, 𝑣𝑗) = Ξ£_{π‘₯ ∈ 𝑆} 1{O(π‘₯, 𝑣𝑖, π‘₯, 𝑣𝑗) == Yes}
2: if FCount(𝑣𝑖, 𝑣𝑗) < 0.3|𝑆| then
3:   return No
4: else return Yes

Theorem 3.10. Given a query vertex 𝑒 and a set 𝑆 with |𝑆| = Ξ©(log(𝑛/𝛿)) such that max_{𝑣 ∈ 𝑆} 𝑑(𝑒, 𝑣) ≀ 𝛼, the farthest point identified using Algorithm 4 (with PairwiseComp), denoted 𝑒max, is within 6𝛼 of the distance to the optimal farthest point, i.e., 𝑑(𝑒, 𝑒max) β‰₯ max_{𝑣 ∈ 𝑉} 𝑑(𝑒, 𝑣) βˆ’ 6𝛼, with probability 1 βˆ’ 𝛿. Further, the query complexity is 𝑂(𝑛 logΒ³(𝑛/𝛿)).

4 π‘˜-CENTER CLUSTERING

In this section, we present algorithms for π‘˜-center clustering andprove constant approximation guarantees of our algorithm. Our

algorithm is an adaptation of the classical greedy algorithm forπ‘˜-center [27]. The greedy algorithm [27] is initialized with an arbi-trary point as the first cluster center and then iteratively identifiesthe next centers. In each iteration, it assigns all the points to thecurrent set of clusters, by identifying the closest center for eachpoint. Then, it finds the farthest point among the clusters and usesit as the new center. This technique requires 𝑂 (π‘›π‘˜) distance com-parisons in the absence of noise and guarantees 2-approximationof the optimal clustering objective. We provide the pseudocode forthis approach in Algorithm 6. Using an argument similar to the onepresented for the worst case example in Section 3, we can show thatif we use Algorithm 6 where we replace every comparison with anoracle query, the generated clusters can be arbitrarily worse evenfor small error. In order to improve its robustness, we devise newalgorithms to perform assignment of points to respective clustersand farthest point identification. Missing Details from this sectionare discussed in Appendix 10 and 11.

Algorithm 6 Greedy Algorithm
1: Input: Set of points 𝑉
2: Output: Clusters C
3: 𝑠1 ← arbitrary point from 𝑉, 𝑆 = {𝑠1}, C = {{𝑉}}
4: for 𝑖 = 2 to π‘˜ do
5:   𝑠𝑖 ← Approx-Farthest(𝑆, C)
6:   𝑆 ← 𝑆 βˆͺ {𝑠𝑖}
7:   C ← Assign(𝑆)
8: return C

4.1 Adversarial Noise

Now, we describe the two steps (Approx-Farthest and Assign) of the greedy algorithm, which complete the description of Algorithm 6. To do so, we build upon the results from the previous section that give algorithms for finding the maximum/farthest point.

Approx-Farthest. Given a clustering C and a set of centers 𝑆, we construct the pairs (𝑣𝑖, 𝑠𝑗) where 𝑣𝑖 is assigned to the cluster 𝐢(𝑠𝑗) centered at 𝑠𝑗 ∈ 𝑆. Using Algorithm 4, we identify the (point, center) pair with the maximum distance, i.e., argmax_{𝑣𝑖 ∈ 𝑉} 𝑑(𝑣𝑖, 𝑠𝑗), which corresponds to the farthest point. For the parameters, we use 𝑙 = βˆšπ‘›, 𝑑 = log(2π‘˜/𝛿) and a sample 𝑉̂ of size βˆšπ‘› Β· 𝑑.

Assign. After identifying the farthest point, we reassign all the points to the centers (now including the farthest point as the new center) closest to them. We calculate a movement score, called MCount, for every point with respect to each center: MCount(𝑒, 𝑠𝑗) = Ξ£_{π‘ π‘˜ ∈ 𝑆 \ {𝑠𝑗}} 1{O(𝑠𝑗, 𝑒, π‘ π‘˜, 𝑒) == Yes}, for any record 𝑒 ∈ 𝑉 and 𝑠𝑗 ∈ 𝑆. This step is similar to the Count-Max algorithm. We assign the point 𝑒 to the center with the highest MCount value.
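A compact sketch of how the greedy loop of Algorithm 6 can be realized with the two subroutines just described. It reuses the `max_adv` sketch of Algorithm 4 from Section 3, treats (point, center) pairs as the values being compared, and uses MCount for reassignment; the bookkeeping (a dict from centers to their clusters) is our own.

```python
def assign(points, centers, quad_oracle):
    """Assign: move every point to the center with the highest MCount, i.e. the
    center that the oracle most often declares closer than the other centers."""
    clusters = {s: [] for s in centers}
    for u in points:
        def mcount(s_j):
            return sum(1 for s_k in centers
                       if s_k != s_j and quad_oracle(s_j, u, s_k, u) == "Yes")
        clusters[max(centers, key=mcount)].append(u)
    return clusters

def greedy_k_center(points, k, delta, quad_oracle):
    """Greedy k-center (Algorithm 6) under adversarial noise. Approx-Farthest is
    realized by running max_adv over (point, assigned-center) pairs; comparing two
    such pairs through the quadruplet oracle is exactly a comparison-oracle query
    on their distances."""
    centers = [points[0]]                      # arbitrary first center
    clusters = assign(points, centers, quad_oracle)
    for _ in range(1, k):
        pairs = [(u, s) for s in clusters for u in clusters[s]]
        pair_oracle = lambda a, b: quad_oracle(a[0], a[1], b[0], b[1])
        farthest_point, _ = max_adv(pairs, delta, pair_oracle)
        centers.append(farthest_point)
        clusters = assign(points, centers, quad_oracle)
    return centers, clusters
```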

Example 4.1. Suppose we run the π‘˜-center algorithm with π‘˜ = 2 and ΞΌ = 1 on the points in Example 3.8. The optimal centers are 𝑒 and 𝑑 with radius 51. On running our algorithm, suppose 𝑀 is chosen as the first center and Approx-Farthest calculates Count values similar to Example 3.2. The Count values of 𝑠, 𝑑, 𝑒, 𝑣 are 1, 2, 3, 0 respectively. Therefore, our algorithm identifies 𝑒 as the second center, achieving a 3-approximation.


Theoretical Guarantees. We now prove the approximation guarantee obtained by Algorithm 6.

In each iteration, we show that Assign reassigns each point to a center whose distance is approximately similar to the distance from the closest center. This is surprising given that we only use MCount scores for assignment. Similarly, we show that Approx-Farthest (Algorithm 4) identifies a close approximation to the true farthest point. Concretely, we show that every point is assigned to a center which is a (1 + ΞΌ)Β² approximation, and Algorithm 4 identifies a farthest point 𝑀 which is a (1 + ΞΌ)⁡ approximation.

In every iteration of the greedy algorithm, if we identify an 𝛼-approximation of the farthest point, and a 𝛽-approximation when reassigning the points, then we show that the clusters output are a 2𝛼𝛽²-approximation to the π‘˜-center objective. For complete details, please refer to Appendix 10. Combining all the claims, for a given error parameter ΞΌ, we obtain:

Theorem 4.2. For ΞΌ < 1/18, Algorithm 6 achieves a (2 + 𝑂(ΞΌ))-approximation for the π‘˜-center objective using 𝑂(π‘›π‘˜Β² + π‘›π‘˜ logΒ²(π‘˜/𝛿)) oracle queries with probability 1 βˆ’ 𝛿.

4.2 Probabilistic Noise

For probabilistic noise, each query can be incorrect with probability 𝑝, and therefore Algorithm 6 may lead to poor approximation guarantees. Here, we build upon the results from Section 3.3 and provide Approx-Farthest and Assign algorithms. We denote the size of the minimum cluster among the optimum clusters πΆβˆ— by π‘š, and the total failure probability of our algorithms by 𝛿. We assume 𝑝 ≀ 0.40, a constant strictly less than 1/2. Let 𝛾 = 450 be a large constant used in our algorithms which obtains the claimed guarantees.

Overview. Algorithm 7 presents the pseudocode of our algorithm, which operates in two phases. In the first phase (lines 3-12), we sample each point with probability 𝛾 log(𝑛/𝛿)/π‘š to identify a small sample of β‰ˆ 𝛾𝑛 log(𝑛/𝛿)/π‘š points (denoted by 𝑉̂) and use Algorithm 7 to identify π‘˜ centers iteratively. In this process, we also identify a core for each cluster (denoted by 𝑅). Formally, a core is defined as a set of Θ(log(𝑛/𝛿)) points that are very close to the center with high probability. The cores are then used in the second phase (line 15) for the assignment of the remaining points.

Algorithm 7 Greedy Clustering
1: Input: Set of points 𝑉, smallest cluster size π‘š
2: Output: Clusters 𝐢
3: For every 𝑒 ∈ 𝑉, include 𝑒 in 𝑉̂ with probability 𝛾 log(𝑛/𝛿)/π‘š
4: 𝑠1 ← select an arbitrary point from 𝑉̂, 𝑆 ← {𝑠1}
5: 𝐢(𝑠1) ← 𝑉̂
6: 𝑅(𝑠1) ← Identify-Core(𝐢(𝑠1), 𝑠1)
7: for 𝑖 = 2 to π‘˜ do
8:   𝑠𝑖 ← Approx-Farthest(𝑆, 𝐢)
9:   𝐢, 𝑅 ← Assign(𝑆, 𝑠𝑖, 𝑅)
10:   𝑆 ← 𝑆 βˆͺ {𝑠𝑖}
11: 𝐢 ← Assign-Final(𝑆, 𝑅, 𝑉 \ 𝑉̂)
12: return 𝐢

Now, we describe the main challenge in extending the Approx-Farthest and Assign ideas of Algorithm 6. Given a cluster 𝐢 containing the center 𝑠𝑖, when we find the Approx-Farthest, the ideas from Section 3.2 give an 𝑂(logΒ² 𝑛) rank approximation. As shown in Section 3.3, we can improve the approximation guarantee by considering a set of Θ(log(𝑛/𝛿)) points closest to 𝑠𝑖, denoted by 𝑅(𝑠𝑖), which we call the core of 𝑠𝑖. We argue that such an assumption on the set 𝑅 is justified. For example, consider the case when clusters are of size Θ(𝑛): sampling π‘˜ log(𝑛/𝛿) points gives us log(𝑛/𝛿) points from each optimum cluster, which means that there are log(𝑛/𝛿) points within a distance of 2Β·OPT from every sampled point, where OPT refers to the optimum π‘˜-center objective.

Assign. Consider a point 𝑠𝑖 such that we have to assign points to form the cluster 𝐢(𝑠𝑖) centered at 𝑠𝑖. We calculate an assignment score (called ACount in line 4) for every point 𝑒 of a cluster 𝐢(𝑠𝑗) \ 𝑅(𝑠𝑗) centered at 𝑠𝑗. ACount captures the total number of times 𝑒 is considered to belong to the same cluster as π‘₯, for each π‘₯ in the core 𝑅(𝑠𝑗). Intuitively, points that belong to the same cluster as 𝑠𝑖 are expected to have a higher ACount score. Based on the scores, we move 𝑒 to 𝐢(𝑠𝑖) or keep it in 𝐢(𝑠𝑗).

Algorithm 8 Assign(𝑆, 𝑠𝑖, 𝑅)
1: 𝐢(𝑠𝑖) ← {𝑠𝑖}
2: for 𝑠𝑗 ∈ 𝑆 do
3:   for 𝑒 ∈ 𝐢(𝑠𝑗) \ 𝑅(𝑠𝑗) do
4:     ACount(𝑒, 𝑠𝑖, 𝑠𝑗) = Ξ£_{π‘£π‘˜ ∈ 𝑅(𝑠𝑗)} 1{O(𝑒, 𝑠𝑖, 𝑒, π‘£π‘˜) == Yes}
5:     if ACount(𝑒, 𝑠𝑖, 𝑠𝑗) > 0.3|𝑅(𝑠𝑗)| then
6:       𝐢(𝑠𝑖) ← 𝐢(𝑠𝑖) βˆͺ {𝑒}; 𝐢(𝑠𝑗) ← 𝐢(𝑠𝑗) \ {𝑒}
7: 𝑅(𝑠𝑖) ← Identify-Core(𝐢(𝑠𝑖), 𝑠𝑖)
8: return 𝐢, 𝑅

Algorithm 9 Identify-Core(𝐢(𝑠𝑖), 𝑠𝑖)
1: for 𝑒 ∈ 𝐢(𝑠𝑖) do
2:   Count(𝑒) = Ξ£_{π‘₯ ∈ 𝐢(𝑠𝑖)} 1{O(𝑠𝑖, π‘₯, 𝑠𝑖, 𝑒) == No}
3: Let 𝑅(𝑠𝑖) denote the set of 8𝛾 log(𝑛/𝛿)/9 points with the highest Count values
4: return 𝑅(𝑠𝑖)

Identify-Core. After forming the cluster 𝐢(𝑠𝑖), we identify the core of 𝑠𝑖. For this, we calculate a score for each point, denoted Count, which captures the number of times it is closer to 𝑠𝑖 than other points in 𝐢(𝑠𝑖). Intuitively, we expect points with high values of Count to belong to πΆβˆ—(𝑠𝑖), i.e., the optimum cluster containing 𝑠𝑖. Therefore we sort these Count scores and return the highest scored points.

Approx-Farthest. For a set of clusters C and a set of centers 𝑆, we construct the pairs (𝑣𝑖, 𝑠𝑗) where 𝑣𝑖 is assigned to the cluster 𝐢(𝑠𝑗) centered at 𝑠𝑗 ∈ 𝑆, and each center 𝑠𝑗 ∈ 𝑆 has a corresponding core 𝑅(𝑠𝑗). The farthest point can be found by finding the maximum distance (point, center) pair among all the points considered. To do so, we use the ideas developed in Section 3.3.
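The two probabilistic-noise subroutines just described (Algorithms 8 and 9) can be sketched as follows; 𝛾, the 0.3 threshold, and the core size follow the listings, while the dict-based bookkeeping and parameter passing are our own. Per line 7 of Algorithm 8, the caller is expected to rebuild the core of the new cluster afterwards.

```python
import math

def identify_core(cluster, s_i, gamma, n, delta, quad_oracle):
    """Identify-Core (Algorithm 9): rank the points of C(s_i) by how often the
    oracle says they are closer to s_i than other cluster members are, and keep
    the top 8*gamma*log(n/delta)/9 of them as the core R(s_i)."""
    def count(u):
        return sum(1 for x in cluster
                   if x != u and quad_oracle(s_i, x, s_i, u) == "No")
    core_size = max(1, int(8 * gamma * math.log(n / delta) / 9))
    return sorted(cluster, key=count, reverse=True)[:core_size]

def assign_new_center(clusters, cores, s_i, quad_oracle):
    """Assign (Algorithm 8): move a point u from C(s_j) into the new cluster C(s_i)
    when its ACount against the core R(s_j) exceeds 0.3*|R(s_j)|; core points of
    C(s_j) are never moved. The caller then rebuilds the core of C(s_i)."""
    clusters[s_i] = [s_i]
    for s_j in list(clusters):
        if s_j == s_i:
            continue
        stay, move = [], []
        for u in clusters[s_j]:
            if u in cores[s_j]:
                stay.append(u)
                continue
            acount = sum(1 for v_k in cores[s_j]
                         if quad_oracle(u, s_i, u, v_k) == "Yes")
            (move if acount > 0.3 * len(cores[s_j]) else stay).append(u)
        clusters[s_j] = stay
        clusters[s_i].extend(move)
    return clusters
```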

We leverage ClusterComp (Algorithm 10) to compare the distances of two points, say 𝑣𝑖 and 𝑣𝑗, from their respective centers 𝑠𝑖 and 𝑠𝑗. ClusterComp gives a robust answer to a pairwise comparison query to the oracle O(𝑣𝑖, 𝑠𝑖, 𝑣𝑗, 𝑠𝑗) using the cores 𝑅(𝑠𝑖) and 𝑅(𝑠𝑗). ClusterComp can be used as the pairwise comparison subroutine in place of PairwiseComp for the algorithm in Section 3 to calculate the farthest point. For every 𝑠𝑖 ∈ 𝑆, let 𝑅̂(𝑠𝑖) denote an arbitrary set of √|𝑅(𝑠𝑖)| points from 𝑅(𝑠𝑖). For a ClusterComp comparison query between the pairs (𝑣𝑖, 𝑠𝑖) and (𝑣𝑗, 𝑠𝑗), we use these subsets in Algorithm 10 to ensure that we only make Θ(log(𝑛/𝛿)) oracle queries for every comparison. However, when the query is between points of the same cluster, say 𝐢(𝑠𝑖), we use all the Θ(log(𝑛/𝛿)) points from 𝑅(𝑠𝑖). For the parameters used to find the maximum using Algorithm 4, we use 𝑙 = βˆšπ‘›, 𝑑 = log(𝑛/𝛿).

Example 4.3. Suppose we run the π‘˜-center Algorithm 7 with π‘˜ = 2 and π‘š = 2 on the points in Example 3.8. Let 𝑀 denote the first center chosen; Algorithm 7 identifies the core 𝑅(𝑀) by calculating Count values. If O(𝑒, 𝑀, 𝑠, 𝑀) and O(𝑠, 𝑀, 𝑑, 𝑀) are answered incorrectly (with probability 𝑝), we obtain Count values of 𝑣, 𝑠, 𝑒, 𝑑 as 3, 2, 1, 0 respectively, and 𝑣 is added to 𝑅(𝑀). We identify the second center 𝑒 by calculating FCount for 𝑠, 𝑒 and 𝑑 (see Fig. 3). After assigning (using Assign), the clusters identified are {𝑀, 𝑣}, {𝑒, 𝑠, 𝑑}, achieving a 3-approximation.

Algorithm 10 ClusterComp(𝑣𝑖, 𝑠𝑖, 𝑣𝑗, 𝑠𝑗)
1: comparisons ← 0, FCount(𝑣𝑖, 𝑣𝑗) ← 0
2: if 𝑠𝑖 = 𝑠𝑗 then
3:   FCount(𝑣𝑖, 𝑣𝑗) = Ξ£_{π‘₯ ∈ 𝑅(𝑠𝑖)} 1{O(𝑣𝑖, π‘₯, 𝑣𝑗, π‘₯) == Yes}
4:   comparisons ← |𝑅(𝑠𝑖)|
5: else FCount(𝑣𝑖, 𝑣𝑗) = Ξ£_{π‘₯ ∈ 𝑅̂(𝑠𝑖), 𝑦 ∈ 𝑅̂(𝑠𝑗)} 1{O(𝑣𝑖, π‘₯, 𝑣𝑗, 𝑦) == Yes}
6:   comparisons ← |𝑅̂(𝑠𝑖)| Β· |𝑅̂(𝑠𝑗)|
7: if FCount(𝑣𝑖, 𝑣𝑗) < 0.3 Β· comparisons then
8:   return No
9: else return Yes
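A sketch of ClusterComp (Algorithm 10); here `R` maps each center to its core and `R_hat` to the smaller subset of roughly √|𝑅(𝑠𝑖)| core points used for cross-cluster comparisons, as described above. Both maps are assumed to be precomputed by the caller.

```python
def cluster_comp(v_i, s_i, v_j, s_j, R, R_hat, quad_oracle):
    """ClusterComp (Algorithm 10): robustly compare d(v_i, s_i) with d(v_j, s_j) by
    voting over core anchors. A same-cluster comparison uses the full core R(s_i);
    a cross-cluster comparison uses the smaller subsets R_hat to keep the number of
    oracle queries per comparison at Theta(log(n/delta))."""
    if s_i == s_j:
        anchor_pairs = [(x, x) for x in R[s_i]]
    else:
        anchor_pairs = [(x, y) for x in R_hat[s_i] for y in R_hat[s_j]]
    fcount = sum(1 for x, y in anchor_pairs
                 if quad_oracle(v_i, x, v_j, y) == "Yes")
    return "No" if fcount < 0.3 * len(anchor_pairs) else "Yes"
```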

Assign-Final. After obtaining π‘˜ clusters on the set of sampled points 𝑉̂, we assign the remaining points using ACount scores, similar to the procedure described in Assign. For every point that is not sampled, we first assign it to 𝑠1 ∈ 𝑆; if ACount(𝑒, 𝑠2, 𝑠1) β‰₯ 0.3|𝑅(𝑠1)| we re-assign it to 𝑠2, and we continue this process iteratively. After assigning all the points, the clusters are returned as output.

Theoretical Guarantees

Our algorithm first constructs a sample 𝑉̂ βŠ† 𝑉 and runs the greedy algorithm on this sampled set of points. Our main idea for ensuring a good approximation of the π‘˜-center objective lies in identifying a good core around each center. Using a sampling probability of 𝛾 log(𝑛/𝛿)/π‘š ensures that we have at least Θ(log(𝑛/𝛿)) points from each of the optimal clusters in our sampled set 𝑉̂. By finding the closest points using Count scores, we identify 𝑂(log(𝑛/𝛿)) points around every center that are in the optimal cluster. Essentially, this forms the core of each cluster. These cores are then used for robust pairwise comparison queries (similar to Section 3.3) in our Approx-Farthest and Assign subroutines. We give the following theorem, which guarantees a constant, i.e., 𝑂(1), approximation with high probability.

Theorem 4.4. Given 𝑝 ≀ 0.4, a failure probability 𝛿, and π‘š = Ξ©(logΒ³(𝑛/𝛿)/𝛿), Algorithm 7 achieves an 𝑂(1)-approximation for the π‘˜-center objective using 𝑂(π‘›π‘˜ log(𝑛/𝛿) + (𝑛²/π‘šΒ²) π‘˜ logΒ²(𝑛/𝛿)) oracle queries with probability 1 βˆ’ 𝑂(𝛿).

5 HIERARCHICAL CLUSTERING

In this section, we present robust algorithms for agglomerative hierarchical clustering using the single linkage and complete linkage objectives. The naive algorithms initialize every record as a singleton cluster and merge the closest pair of clusters iteratively. For a set of clusters C = {𝐢1, . . . , 𝐢𝑑}, the distance between any pair of clusters 𝐢𝑖 and 𝐢𝑗, for single linkage clustering, is defined as the minimum distance between any pair of records in the clusters, 𝑑𝑆𝐿(𝐢1, 𝐢2) = min_{𝑣1 ∈ 𝐢1, 𝑣2 ∈ 𝐢2} 𝑑(𝑣1, 𝑣2). For complete linkage, cluster distance is defined as the maximum distance between any pair of records. All algorithms discussed in this section can be easily extended to complete linkage, and therefore we study single linkage clustering. The main challenge in implementing single linkage clustering in the presence of adversarial noise is the identification of the minimum value in a list of at most (𝑛 choose 2) distance values. In each iteration, the closest pair of clusters can be identified by using Algorithm 4 (with 𝑑 = 2 log(𝑛/𝛿)) to calculate the minimum over the set containing pairwise distances. For this algorithm, Lemma 5.1 shows that the pair of clusters merged in any iteration is a constant approximation of the optimal merge operation at that iteration. The proof of this lemma follows from Theorem 3.6.

Lemma 5.1. Given a collection of clusters C = {𝐢1, . . . , πΆπ‘Ÿ}, our algorithm to calculate the closest pair (using Algorithm 4) identifies 𝐢1 and 𝐢2 to merge according to the single linkage objective if 𝑑𝑆𝐿(𝐢1, 𝐢2) ≀ (1 + ΞΌ)Β³ min_{𝐢𝑖, 𝐢𝑗 ∈ C} 𝑑(𝐢𝑖, 𝐢𝑗), with probability 1 βˆ’ 𝛿/𝑛, and requires 𝑂(𝑛² logΒ²(𝑛/𝛿)) queries.

Algorithm 11 Greedy Algorithm
1: Input: Set of points 𝑉
2: Output: Hierarchy 𝐻
3: 𝐻 ← {{𝑣} | 𝑣 ∈ 𝑉}, C ← {{𝑣} | 𝑣 ∈ 𝑉}
4: for 𝐢𝑖 ∈ C do
5:   𝐢̄𝑖 ← NearestNeighbor of 𝐢𝑖 among C \ {𝐢𝑖} using Sec 3.3
6: while |C| > 1 do
7:   Let (𝐢𝑗, 𝐢̄𝑗) be the closest pair among (𝐢𝑖, 𝐢̄𝑖), βˆ€ 𝐢𝑖 ∈ C
8:   𝐢′ ← 𝐢𝑗 βˆͺ 𝐢̄𝑗
9:   Update the adjacency list of 𝐢′ with respect to C
10:   Add 𝐢′ as the parent of 𝐢𝑗 and 𝐢̄𝑗 in 𝐻
11:   C ← (C \ {𝐢𝑗, 𝐢̄𝑗}) βˆͺ {𝐢′}
12:   𝐢̄′ ← NearestNeighbor of 𝐢′ from its adjacency list
13: return 𝐻

Overview. Agglomerative clustering techniques are known to be inefficient. Each iteration of the merge operation compares at most (𝑛 choose 2) pairs of distance values, and the algorithm runs for 𝑛 iterations to construct the hierarchy. This yields an overall query complexity of 𝑂(𝑛³). To improve the query complexity, the SLINK algorithm [47] was proposed, which constructs the hierarchy in 𝑂(𝑛²) comparisons. To implement this algorithm with a comparison oracle, for every cluster 𝐢𝑖 ∈ C we maintain an adjacency list containing every cluster 𝐢𝑗 in C along with a pair of records whose distance equals the distance between the clusters. For example, the entry for 𝐢𝑗 in the adjacency list of 𝐢𝑖 contains the pair of records (𝑣𝑖, 𝑣𝑗) such that 𝑑(𝑣𝑖, 𝑣𝑗) = min_{𝑣𝑖 ∈ 𝐢𝑖, 𝑣𝑗 ∈ 𝐢𝑗} 𝑑(𝑣𝑖, 𝑣𝑗). Algorithm 11 presents the pseudocode for single linkage clustering under the adversarial noise model. The algorithm is initialized with singleton clusters where every record is a separate cluster. Then, we identify the closest cluster for every 𝐢𝑖 ∈ C, and denote it by 𝐢̄𝑖. This step takes 𝑛 nearest neighbor queries, each requiring 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries. In every subsequent iteration, we identify the closest pair of clusters (using Section 3.3), say 𝐢𝑗 and 𝐢̄𝑗, from C.

After merging these clusters, the data structure is updated as follows. To update the adjacency list, we need the pair of records with minimum distance between the merged cluster 𝐢′ ≑ 𝐢𝑗 βˆͺ 𝐢̄𝑗 and every other cluster πΆπ‘˜ ∈ C. In the previous iteration of the algorithm, we already have the minimum distance record pairs for (𝐢𝑗, πΆπ‘˜) and (𝐢̄𝑗, πΆπ‘˜). Therefore a single query between these two pairs of records is sufficient to identify the minimum distance edge between 𝐢′ and πΆπ‘˜ (formally, 𝑑𝑆𝐿(𝐢𝑗 βˆͺ 𝐢̄𝑗, πΆπ‘˜) = min{𝑑𝑆𝐿(𝐢𝑗, πΆπ‘˜), 𝑑𝑆𝐿(𝐢̄𝑗, πΆπ‘˜)}). The nearest neighbor of the merged cluster is identified by running the minimum calculation over its adjacency list. In Algorithm 11, as we identify the closest pair of clusters, each iteration requires 𝑂(𝑛 logΒ²(𝑛/𝛿)) queries. As our algorithm terminates in at most 𝑛 iterations, it has an overall query complexity of 𝑂(𝑛² logΒ²(𝑛/𝛿)). In Theorem 5.2, we give an approximation guarantee for every merge operation of Algorithm 11.
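The constant-cost adjacency-list update after a merge can be sketched as follows. Each adjacency entry stores the witness record pair realizing the single-linkage distance between two clusters, and a single quadruplet query decides which witness the merged cluster inherits; cluster identifiers and the dict-of-dicts layout are our own assumptions.

```python
def merge_update(adj, C_j, C_j_bar, C_new, quad_oracle):
    """Single-linkage merge update (Algorithm 11): for every surviving cluster C_k,
    the witness pair of the merged cluster C_new is whichever of the witness pairs
    of (C_j, C_k) and (C_j_bar, C_k) is closer, decided by one quadruplet query."""
    adj[C_new] = {}
    for C_k in list(adj):
        if C_k in (C_j, C_j_bar, C_new):
            continue
        a1, b1 = adj[C_j][C_k]        # witness pair realizing d_SL(C_j, C_k)
        a2, b2 = adj[C_j_bar][C_k]    # witness pair realizing d_SL(C_j_bar, C_k)
        best = (a1, b1) if quad_oracle(a1, b1, a2, b2) == "Yes" else (a2, b2)
        adj[C_new][C_k] = best
        adj[C_k][C_new] = best
        del adj[C_k][C_j], adj[C_k][C_j_bar]
    del adj[C_j], adj[C_j_bar]
    return adj
```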

Theorem 5.2. In any iteration, suppose the distance between a cluster 𝐢𝑗 ∈ C and its identified nearest neighbor 𝐢̄𝑗 is an 𝛼-approximation of its distance from the optimal nearest neighbor. Then the distance between the pair of clusters merged by Algorithm 11 is an 𝛼(1 + ΞΌ)Β³ approximation of the optimal distance between the closest pair of clusters in C, with probability 1 βˆ’ 𝛿, using 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries.

Probabilistic Noise Model. The algorithms discussed above do not extend to probabilistic noise because each query errs with constant probability. However, when we are given a priori a partitioning of 𝑉 into clusters of size > log 𝑛 such that the maximum distance between any pair of records in every cluster is smaller than 𝛼 (a constant), Algorithm 11 can be used to construct the hierarchy correctly. For this case, the algorithm to identify the closest and farthest pair of clusters is the same as the one discussed in Section 3.3. Note that agglomerative clustering algorithms are known to require Ξ©(𝑛²) queries, which can be infeasible for million-scale datasets. However, blocking-based techniques offer efficient heuristics to prune low-similarity pairs [44]. Devising provable algorithms with better time complexity is outside the scope of this work.

6 EXPERIMENTS

In this section, we evaluate the effectiveness of our techniques on various real-world datasets and answer the following questions. Q1: Is the quadruplet oracle practically feasible? How do the different types of queries compare in terms of quality and time taken by annotators? Q2: Are the proposed techniques robust to different levels of noise in oracle answers? Q3: How do the query complexity and solution quality of the proposed techniques compare with the optimum for varied levels of noise?

6.1 Experimental Setup

Datasets. We consider the following real-world datasets.
(1) cities dataset [2] comprises 36K cities of the United States. The features of a city include state, county, zip code, population, time zone, latitude and longitude.
(2) caltech dataset comprises 11.4K images from 20 categories. The ground truth distance between records is calculated using the hierarchical categorization described in [29].
(3) amazon dataset contains 7K images and textual descriptions collected from amazon.com [31]. For obtaining the ground truth distances we use Amazon's hierarchical catalog.
(4) monuments dataset comprises 100 images belonging to 10 tourist locations around the world.
(5) dblp contains 1.8M titles of computer science papers from different areas [60]. From these titles, noun phrases were extracted and a dictionary of all the phrases was constructed. The Euclidean distance in word2vec embedding space is considered the ground truth distance between concepts.
Baselines. We compare our techniques with the optimal solution (whenever possible) and the following baselines. (a) Tour2 constructs a binary tournament tree over the entire dataset to compare the values; the root node corresponds to the identified maximum/minimum value (Algorithm 2 with Ξ» = 2). This approach is an adaptation of the maximum-finding algorithm in [15], with the difference that each query is not repeated multiple times to increase the success probability. We also use it to identify the farthest and nearest point in the greedy π‘˜-center Algorithm 6 and the closest pair of clusters in hierarchical clustering.

(b) Samp considers a sample of βˆšπ‘› records and identifies the farthest/nearest by performing a quadratic number of comparisons over the sampled points using Count-Max. For π‘˜-center, Samp considers a sample of π‘˜ log 𝑛 points, identifies π‘˜ centers over these samples using the greedy algorithm, and then assigns all the remaining points to the identified centers by querying each record with every pair of centers.

Calculating the optimal clustering objective for π‘˜-center is NP-hard even in the presence of accurate pairwise distances [59]. So, we compare the solution quality with respect to the greedy algorithm on the ground truth distances, denoted by TDist. For farthest, nearest neighbor and hierarchical clustering, TDist denotes the optimal technique that has access to the ground truth distance between records.

Our algorithms are labelled Far for farthest identification, NN for nearest neighbor, kC for π‘˜-center and HC for hierarchical clustering, with subscript π‘Ž denoting the adversarial model and 𝑝 denoting the probabilistic noise model. All algorithms are implemented in C++ and run on a server with 64GB RAM. The reported results are averaged over 100 randomly chosen iterations. Unless specified otherwise, we set 𝑑 = 1 in Algorithm 4 and 𝛾 = 2 in Algorithm 7.
Evaluation Metric. For finding the maximum and nearest neighbors, we compare different techniques by evaluating the true distance of the returned solution from the queried points. For π‘˜-center, we use the objective value, i.e., the maximum radius of the returned clusters, as the evaluation metric and compare against the true greedy algorithm (TDist) and the other baselines. For datasets where the ground truth clusters are known (amazon, caltech and monuments), we use the F-score over intra-cluster pairs to compare with the baselines [20]. For hierarchical clustering, we compute the pairs of clusters merged in every iteration and compare the average true distance between these clusters. In addition to the quality of the returned solution, we compare the query complexity and running time of the proposed techniques with the baselines described above.
Noise Estimation. For the cities, amazon, caltech, and monuments datasets, we ran a user study on Amazon Mechanical Turk to estimate the noise in oracle answers over a small sample of the dataset,


(a) caltech  (b) amazon
Figure 4: Accuracy values (denoted by the color of a cell) for different distance ranges observed during our user study. The diagonal entries refer to the quadruplets with similar distance between the corresponding pairs, and the distance increases as we go further away from the diagonal.

often referred to as the validation set. Using the crowd responses, we trained a classifier (a random forest [51] obtained the best results) with active learning to act as the quadruplet oracle and reduce the number of queries to the crowd. Our active learning algorithm [50] uses a batch of 20 queries, and we stop it when the classifier accuracy on the validation set does not improve by more than 0.01 [26]. To efficiently construct a small set of candidates for active learning and to prune low-similarity pairs for dblp, we employ token-based blocking [44] for the datasets. For the synthetic oracle, we simulate the quadruplet oracle with different values of the noise parameters.
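For concreteness, here is a minimal Python sketch of such a simulated quadruplet oracle under the two noise models. The function and parameter names (mu for the adversarial margin, p for the probabilistic error rate) are ours; under adversarial noise the simulator flips every answer on close pairs as a worst case, whereas the paper only assumes such answers can be arbitrary.

```python
import random

def make_quadruplet_oracle(dist, mode="adversarial", mu=1.0, p=0.1, rng=random):
    """Return an oracle answering 'is d(u, v) <= d(w, x)?'.

    dist(a, b): ground-truth distance function.
    mode='adversarial': answers are flipped (worst case) whenever the two
        distances are within a (1 + mu) factor of each other; otherwise correct.
    mode='probabilistic': every answer is flipped independently with probability p.
    """
    def oracle(u, v, w, x):
        d1, d2 = dist(u, v), dist(w, x)
        correct = d1 <= d2
        if mode == "adversarial":
            close = max(d1, d2) <= (1 + mu) * min(d1, d2)
            return (not correct) if close else correct
        # probabilistic noise: independent flip with probability p
        return (not correct) if rng.random() < p else correct
    return oracle
```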

6.2 User study

In this section, we evaluate the users' ability to answer quadruplet queries and compare it with other types of queries.
Setup. We ran a user study on the Amazon Mechanical Turk platform for four datasets: cities, amazon, caltech and monuments. We consider the ground truth distance between record pairs, discretize it into buckets, and assign a pair of records to a bucket if its distance falls within the bucket's range. For every pair of buckets, we query a random subset of log 𝑛 quadruplet oracle queries (where 𝑛 is the size of the dataset). Each query is answered by three different crowd workers, and a majority vote is taken as the answer to the query.
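A small sketch of this setup, assuming a ground-truth distance function and a fixed set of bucket boundaries (both names are ours), is shown below.

```python
from collections import defaultdict

def bucketize_pairs(pairs, dist, edges):
    """Group record pairs into buckets by their ground-truth distance.

    edges: sorted bucket boundaries; a pair lands in the bucket whose range
    contains its distance.
    """
    buckets = defaultdict(list)
    for (u, v) in pairs:
        idx = sum(dist(u, v) >= e for e in edges)  # number of boundaries below d(u, v)
        buckets[idx].append((u, v))
    return buckets

def majority_answer(answers):
    """Aggregate the crowd answers ('Yes'/'No') for one query by majority vote."""
    yes = sum(a == "Yes" for a in answers)
    return "Yes" if yes > len(answers) - yes else "No"
```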

6.2.1 Qualitative Analysis of Oracle. In Figure 4, for every pair of buckets, we plot (as a heat map) the accuracy of the answers obtained from the crowd workers for quadruplet queries. For all datasets, the average accuracy of quadruplet queries is more than 0.83, and the accuracy is lowest (as low as 0.5) whenever both pairs of records belong to the same bucket. However, we observe varied behavior across datasets as the distance between the considered pairs increases.

For the caltech dataset, we observe that when the ratio of the distances is more than 1.45 (indicated by a black line in Figure 4(a)), there is no noise (or close to zero noise) in the query responses. The sharp decline in noise as the distance between the pairs increases suggests that the adversarial noise model is satisfied for this dataset. We observe a similar pattern for the cities and monuments datasets. For the amazon dataset, there is substantial noise across all distance ranges (see Figure 4(b)) rather than a sharp decline, suggesting that the probabilistic model is satisfied.

(a) Farthest, higher is better  (b) Nearest Neighbor (NN), lower is better
Figure 5: Comparison of farthest and NN techniques for crowdsourced oracle queries (normalized distance for Tdist, Far/NN, Tour2 and Samp on cities, caltech, monuments and amazon).

6.2.2 Comparison with pairwise querying mechanisms. To evaluate the benefit of quadruplet queries, we compare the quality of quadruplet comparison oracle answers with the following pairwise oracle query models. (a) Optimal cluster query: this query asks questions of the type 'do 𝑒 and 𝑣 refer to the same/similar type?'. (b) Distance query: how similar are the records π‘₯ and 𝑦? Here, the annotator scores the similarity of the pair on a scale of 1 to 10.
We make the following observations. (i) Optimal cluster queries are answered correctly only if the ground truth clusters refer to different entities (each cluster referring to a distinct entity). Crowd workers tend to answer 'No' if the pair of records refer to different entities. Therefore, we observe high precision (more than 0.90) but low recall (0.50 on amazon and 0.30 on caltech for π‘˜ = 10) of the returned labels. (ii) We observed very high variance in the distance estimation query responses. For record pairs with identical entities, the users returned distance estimates within 20% of the correct distances; in all other cases, the estimates had errors of up to 50%. We provide a more detailed comparison of the quality of clusters identified by pairwise query responses and by quadruplet queries in the next section.

6.3 Crowd Oracle: Solution Quality & Query Complexity

In this section, we compare the quality of our proposed techniques on the datasets on which we performed the user study. Following the findings of Section 6.2, we use the probabilistic model based algorithm for amazon (with 𝑝 = 0.50) and the adversarial noise model based algorithm for caltech, monuments and cities.
Finding Max and Farthest/Nearest Neighbor. Figure 5 compares the quality of the farthest and nearest neighbor (NN) identified by the proposed techniques and the other baselines. The values are normalized by the maximum value to present all datasets on the same scale. Across all datasets, the point identified by Far and NN is closest to the optimal value, TDist. In contrast, the farthest returned by Tour2 is better than that of Samp for the cities dataset but not for caltech, monuments and amazon. We found that this difference in quality across datasets is due to the varied distribution of distances between pairs. The cities dataset has a skewed distribution of distances between record pairs, leading to a unique optimal solution to the farthest/NN problem. For this reason, the set of records sampled by Samp does not contain any record that is a good approximation of the optimal farthest point. However, the ground truth distances between record pairs in amazon, monuments and caltech are less skewed, with more than log 𝑛 records approximating the optimal farthest point for all queries. Therefore, Samp performs better than Tour2


(a) cities, ΞΌ=1  (b) dblp, ΞΌ=0.5  (c) cities, p=0.1  (d) dblp, p=0.1
Figure 6: π‘˜-center clustering objective comparison for the adversarial and probabilistic noise models (objective vs. π‘˜ for kC, Tour2, Samp and TDist).

(a) Single Linkage  (b) Complete Linkage
Figure 7: Comparison of hierarchical clustering techniques with the crowdsourced oracle (normalized distance for Tdist, HC, Tour2 and Samp on cities, caltech, monuments and amazon).

on these datasets. We observe that Samp performs worse for NN because its sample does not always contain the closest point.
π‘˜-center Clustering. We evaluate the F-scoreΒ² of the clusters generated by our techniques along with the baselines and the pairwise optimal query mechanism (denoted Oq)Β³. Table 1 presents the summary of our results for different values of π‘˜. Across all datasets, our technique achieves an F-score of more than 0.90. On the other hand, Tour2 and Samp do not identify the ground truth clusters correctly, leading to low F-scores. Similarly, Oq achieves poor recall (and hence a low F-score) as it labels many record pairs as belonging to separate clusters. For example, a frog and a butterfly belong to the same optimal cluster for caltech (π‘˜ = 10), but the two records are assigned to different clusters by Oq.
Hierarchical Clustering. Figure 7 compares the average distance of the merged clusters across different iterations of the agglomerative clustering algorithm. Tour2 has 𝑂(𝑛³) complexity and did not finish on the cities dataset within 48 hrs. The objective values of the different techniques are normalized by the optimal value, with Tdist denoting 1. For all datasets, HC performs better than Samp and Tour2. Among the datasets, the quality of the hierarchies generated for monuments is similar for all techniques due to low noise.
Query Complexity. To ensure scalability, we trained the active learning based classifier for all the aforementioned experiments. In total, amazon, cities, and caltech required 540 (cost: $32.40), 220 (cost: $13.20) and 280 (cost: $16.80) queries to the crowd, respectively.

6.4 Simulated Oracle: Solution Quality & Query Complexity

In this section, we compare the robustness of the techniques when the query responses are simulated synthetically for given ΞΌ and 𝑝.

Β²Optimal clusters are identified from the original source of the datasets (amazon and caltech) and manually for monuments.
Β³We report the results on the sample of queries asked to the crowd, as opposed to training a classifier, because the classifier generates noisier results and has a poorer F-score than the labels generated by crowdsourcing.

Technique              kC     Tour2   Samp   Oq*
caltech (π‘˜ = 10)       1      0.88    0.91   0.45
caltech (π‘˜ = 15)       1      0.89    0.88   0.49
caltech (π‘˜ = 20)       0.99   0.93    0.87   0.58
monuments (π‘˜ = 5)      1      0.95    0.97   0.77
amazon (π‘˜ = 7)         0.96   0.74    0.57   0.48
amazon (π‘˜ = 14)        0.92   0.66    0.54   0.72

Table 1: F-score comparison of π‘˜-center clustering. Oq is marked with βˆ— as it was computed on a sample of 150 pairwise queries to the crowdΒ³. All other techniques were run on the complete dataset using a classifier.

(a) cities–Adversarial  (b) cities–Probabilistic
Figure 8: Comparison of farthest identification techniques for the adversarial and probabilistic noise models (distance vs. ΞΌ and vs. 𝑝 for Tdist, Far, Tour2 and Samp).

Finding Max and Farthest/Nearest Neighbor. In Figure 8(a), ΞΌ = 0 denotes the setting where the oracle answers all queries correctly. In this case, Far and Tour2 identify the optimal solution, but Samp does not for cities. In both datasets, Far identifies the correct farthest point for ΞΌ < 1. Even with an increase in noise (ΞΌ), we observe that the identified farthest point is always at a distance within 4 times the optimal distance (see Figure 8(a)). We observe that the quality of the farthest point identified by Tour2 is close to that of Far for smaller ΞΌ because the optimal farthest point 𝑣max has only a few points in the confusion region 𝐢 (see Section 3) containing the points that are close to 𝑣max. For example, fewer than 10% of the points are present in 𝐢 when ΞΌ = 1 for the cities dataset, i.e., fewer than 10% of the points can return an erroneous answer when compared with 𝑣max.
In Figure 8(b), we compare the true distance of the identified farthest points under probabilistic noise with error probability 𝑝. We observe that Far𝑝 identifies points with distance values very close to the farthest distance Tdist across all datasets and error values. This shows that Far performs significantly better than the theoretical approximation presented in Section 3. On the other hand, the solution returned by Samp is more than 4Γ— smaller than the value returned by Far𝑝 for an error probability of 0.3. Tour2 has performance similar to Far𝑝 for 𝑝 ≀ 0.1, but we observe a decline in solution quality for higher noise (𝑝) values.


(a) cities–Adversarial  (b) cities–Probabilistic
Figure 9: Comparison of nearest neighbor techniques for the adversarial and probabilistic noise models (lower is better); distance vs. ΞΌ and vs. 𝑝 for Tdist, NN and Tour2.

In Figures 9(a) and 9(b), we compare the true distance of the identified nearest neighbor with the different baselines.

NN shows superior performance compared to Tour2 across all error values. This confirms the lack of robustness of Tour2 discussed in Section 3. The solution quality of NN does not worsen with an increase in error. We omit Samp from the plots because the returned points had very poor performance (as bad as 700 even in the absence of error). We observed similar behavior for the other datasets. In terms of query complexity, NN requires around 53 Γ— 10Β³ queries for the cities dataset, and the number of queries grows linearly with the dataset size. Among the baselines, Tour2 uses 37 Γ— 10Β³ queries and Samp uses 18 Γ— 10Β³. In conclusion, we observe that our techniques achieve the best quality across all datasets and error values, while Tour2 performs similarly to Far for low error and its quality degrades with increasing error.

π‘˜-center Clustering. Figure 6 compares the π‘˜-center objective ofthe returned clusters for varying π‘˜ in the adversarial and probabilis-tic noise model. Tdist denotes the best possible clustering objective,which is guaranteed to be a 2-approximation of the optimal objec-tive. The set of clusters returned by kC are consistently very closeto TDist across all datasets, validating the theory. For higher valuesof π‘˜ , kC approaches closer to TDist, thereby improving the approx-imation guarantees. The quality of clusters identified by kC aresimilar to that of Tour2 and Far for adversarial noise (Figure 6a,b)but considerably better for probabilistic noise (Figure 6c,d).Running time. Table 2 compares the running time and the numberof required quadruplet comparisons for various problems underadversarial noise model with ` = 1 for the largest dblp dataset.Far and NN requires less than 6 seconds for both adversarial andprobabilistic error models. Our π‘˜-center clustering technique re-quires less than 450 min to identify 50 centers for dblp datasetacross different noise models; the running time grows linearly withπ‘˜ . While the running time of our algorithms are slightly higherthan Tour2 for farthest, nearest and π‘˜-center, Tour2 did not finishin 48 hrs due to𝑂 (𝑛3) running time for single and complete linkagehierarchical clustering. We observe similar performance for theprobabilistic noise model. Note that even though the number ofcomparisons are in millions, this dataset requires only 740 queriesto the crowd workers to train the classifier.

7 CONCLUSION

In this paper, we show how algorithms for various basic tasks such as finding the maximum, nearest neighbor search, π‘˜-center clustering, and agglomerative hierarchical clustering can be designed using a distance-based comparison oracle in the presence of noise. We believe

Problem             Our Approach        Tour2              Samp
                    Time    # Comp      Time    # Comp     Time    # Comp
Farthest            0.1     2.2M        0.06    2M         0.07    1M
Nearest             0.075   2M          0.07    2M         0.61    1M
kC (k=50)           450     120M        375.3   95M        477     105M
Single Linkage      1813    990M        DNF     -          1760    940M
Complete Linkage    1950    940M        DNF     -          1940    920M

Table 2: Running time (in minutes) and number of quadruplet comparisons (denoted by # Comp, in millions) of different techniques for the dblp dataset under the adversarial noise model with ΞΌ = 1. DNF denotes 'did not finish'.

our techniques can be useful for other clustering tasks such asπ‘˜-means and π‘˜-median, and we leave those as future work.

REFERENCES

[1] Google vision api https://cloud.google.com/vision.[2] United states cities database. https://simplemaps.com/data/us-cities.[3] Nir Ailon, Anup Bhattacharya, Ragesh Jaiswal, and Amit Kumar. Approximate

clustering with same-cluster queries. In 9th Innovations in Theoretical Computer

Science Conference (ITCS 2018), volume 94, page 40. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2018.

[4] MiklΓ³s Ajtai, Vitaly Feldman, Avinatan Hassidim, and Jelani Nelson. Sorting andselection with imprecise comparisons. In International Colloquium on Automata,

Languages, and Programming, pages 37–48. Springer, 2009.[5] Akhil Arora, Sakshi Sinha, Piyush Kumar, and Arnab Bhattacharya. Hd-index:

pushing the scalability-accuracy boundary for approximate knn search in high-dimensional spaces. Proceedings of the VLDB Endowment, 11(8):906–919, 2018.

[6] Hassan Ashtiani, Shrinu Kushagra, and Shai Ben-David. Clustering with same-cluster queries. In Advances in neural information processing systems, pages3216–3224, 2016.

[7] Shai Ben-David. Clustering-what both theoreticians and practitioners are doingwrong. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

[8] Mark Braverman, Jieming Mao, and S Matthew Weinberg. Parallel algorithmsfor select and partition with noisy comparisons. In Proceedings of the forty-eighth

annual ACM symposium on Theory of Computing, pages 851–862, 2016.[9] Mark Braverman and Elchanan Mossel. Noisy sorting without resampling. In

Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms,pages 268–276. Society for Industrial and Applied Mathematics, 2008.

[10] Marco Bressan, NicolΓ² Cesa-Bianchi, Andrea Paudice, and Fabio Vitale. Correla-tion clustering with adaptive similarity queries. InAdvances in Neural Information

Processing Systems, pages 12510–12519, 2019.[11] Vaggos Chatziafratis, Rad Niazadeh, and Moses Charikar. Hierarchical clustering

with structural constraints. arXiv preprint arXiv:1805.09476, 2018.[12] I Chien, Chao Pan, and Olgica Milenkovic. Query k-means clustering and the

double dixie cup problem. In Advances in Neural Information Processing Systems,pages 6649–6658, 2018.

[13] Tuhinangshu Choudhury, Dhruti Shah, and Nikhil Karamchandani. Top-mclustering with a noisy oracle. In 2019 National Conference on Communications

(NCC), pages 1–6. IEEE, 2019.[14] Eleonora Ciceri, Piero Fraternali, Davide Martinenghi, and Marco Tagliasacchi.

Crowdsourcing for top-k query processing over uncertain data. IEEE Transactionson Knowledge and Data Engineering, 28(1):41–53, 2015.

[15] Susan Davidson, Sanjeev Khanna, Tova Milo, and Sudeepa Roy. Top-k andclustering with noisy comparisons. ACM Trans. Database Syst., 39(4), December2015.

[16] Eyal Dushkin and Tova Milo. Top-k sorting under partial order information. InProceedings of the 2018 International Conference on Management of Data, pages1007–1019, 2018.

[17] Ehsan Emamjomeh-Zadeh and David Kempe. Adaptive hierarchical clusteringusing ordinal queries. In Proceedings of the Twenty-Ninth Annual ACM-SIAM

Symposium on Discrete Algorithms, pages 415–429. SIAM, 2018.[18] Uriel Feige, Prabhakar Raghavan, David Peleg, and Eli Upfal. Computing with

noisy information. SIAM Journal on Computing, 23(5):1001–1018, 1994.[19] Donatella Firmani, Barna Saha, and Divesh Srivastava. Online entity resolution

using an oracle. PVLDB, 9(5):384–395, 2016.[20] Sainyam Galhotra, Donatella Firmani, Barna Saha, and Divesh Srivastava. Robust

entity resolution using random graphs. In Proceedings of the 2018 International

Conference on Management of Data, pages 3–18, 2018.[21] Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Sorting

with recurrent comparison errors. In 28th International Symposium on Algo-

rithms and Computation (ISAAC 2017). Schloss Dagstuhl-Leibniz-Zentrum fuerInformatik, 2017.

[22] Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Optimalsortingwith persistent comparison errors. In 27th Annual European Symposium on


Algorithms (ESA 2019), volume 144, page 49. Schloss Dagstuhl-Leibniz-ZentrumfΓΌr Informatik, 2019.

[23] Barbara Geissmann, Stefano Leucci, Chih-Hung Liu, and Paolo Penna. Optimaldislocation with persistent errors in subquadratic time. Theory of Computing

Systems, 64(3):508–521, 2020.[24] Debarghya Ghoshdastidar, MichaΓ«l Perrot, and Ulrike von Luxburg. Foundations

of comparison-based hierarchical clustering. In Advances in Neural Information

Processing Systems, pages 7454–7464, 2019.[25] Yogesh Girdhar and Gregory Dudek. Efficient on-line data summarization using

extremum summaries. In 2012 IEEE International Conference on Robotics and

Automation, pages 3490–3496. IEEE, 2012.[26] Chaitanya Gokhale, Sanjib Das, AnHai Doan, Jeffrey F Naughton, Narasimhan

Rampalli, Jude Shavlik, and Xiaojin Zhu. Corleone: Hands-off crowdsourcing forentity matching. In Proceedings of the 2014 ACM SIGMOD international conference

on Management of data, pages 601–612, 2014.[27] Teofilo F Gonzalez. Clustering to minimize the maximum intercluster distance.

Theoretical Computer Science, 38:293–306, 1985.[28] Kasper Green Larsen, Michael Mitzenmacher, and Charalampos Tsourakakis.

Clustering with a faulty oracle. In Proceedings of The Web Conference 2020,WWW ’20, page 2831–2834, New York, NY, USA, 2020. Association for ComputingMachinery.

[29] Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object categorydataset. 2007.

[30] Stephen Guo, Aditya Parameswaran, and Hector Garcia-Molina. So who won?dynamic max discovery with the crowd. In Proceedings of the 2012 ACM SIGMOD

International Conference on Management of Data, pages 385–396, 2012.[31] Ruining He and Julian McAuley. Ups and downs: Modeling the visual evolution

of fashion trends with one-class collaborative filtering. In proceedings of the 25th

international conference on world wide web, pages 507–517, 2016.[32] Max Hopkins, Daniel Kane, Shachar Lovett, and Gaurav Mahajan. Noise-

tolerant, reliable active classification with comparison queries. arXiv preprintarXiv:2001.05497, 2020.

[33] Wasim Huleihel, Arya Mazumdar, Muriel MΓ©dard, and Soumyabrata Pal. Same-cluster querying for overlapping clusters. In Advances in Neural Information

Processing Systems, pages 10485–10495, 2019.[34] Christina Ilvento. Metric learning for individual fairness. arXiv preprint

arXiv:1906.00250, 2019.[35] Ehsan Kazemi, Lin Chen, Sanjoy Dasgupta, and Amin Karbasi. Comparison based

learning from weak oracles. arXiv preprint arXiv:1802.06942, 2018.[36] Taewan Kim and Joydeep Ghosh. Relaxed oracles for semi-supervised clustering.

arXiv preprint arXiv:1711.07433, 2017.[37] Taewan Kim and Joydeep Ghosh. Semi-supervised active clustering with weak

oracles. arXiv preprint arXiv:1709.03202, 2017.[38] Rolf Klein, Rainer Penninger, Christian Sohler, and David P Woodruff. Tolerant

algorithms. In European Symposium on Algorithms, pages 736–747. Springer,2011.

[39] MatthΓ€us Kleindessner, Pranjal Awasthi, and Jamie Morgenstern. Fair k-centerclustering for data summarization. In International Conference on Machine Learn-

ing, pages 3448–3457, 2019.[40] Ngai Meng Kou, Yan Li, Hao Wang, Leong Hou U, and Zhiguo Gong. Crowd-

sourced top-k queries by confidence-aware pairwise judgments. In Proceedings of

the 2017 ACM International Conference on Management of Data, pages 1415–1430,2017.

[41] Blake Mason, Ardhendu Tripathy, and Robert Nowak. Learning nearest neighborgraphs from noisy distance samples. In Advances in Neural Information Processing

Systems, pages 9586–9596, 2019.[42] Arya Mazumdar and Barna Saha. Clustering with noisy queries. In Advances in

Neural Information Processing Systems, pages 5788–5799, 2017.[43] Arya Mazumdar and Barna Saha. Query complexity of clustering with side

information. In Advances in Neural Information Processing Systems, pages 4682–4693, 2017.

[44] George Papadakis, Jonathan Svirsky, Avigdor Gal, and Themis Palpanas. Com-parative analysis of approximate blocking techniques for entity resolution. Pro-ceedings of the VLDB Endowment, 9(9):684–695, 2016.

[45] Vassilis Polychronopoulos, Luca De Alfaro, James Davis, Hector Garcia-Molina,and Neoklis Polyzotis. Human-powered top-k lists. InWebDB, pages 25–30, 2013.

[46] DraΕΎen Prelec, H Sebastian Seung, and John McCoy. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532–535, 2017.

[47] Robin Sibson. Slink: an optimally efficient algorithm for the single-link clustermethod. The computer journal, 16(1):30–34, 1973.

[48] Omer Tamuz, Ce Liu, Serge Belongie, Ohad Shamir, and Adam Tauman Kalai.Adaptively learning the crowd kernel. In Proceedings of the 28th International

Conference on International Conference on Machine Learning, pages 673–680, 2011.[49] Antti Ukkonen. Crowdsourced correlation clustering with relative distance

comparisons. In 2017 IEEE International Conference on Data Mining (ICDM), pages1117–1122. IEEE, 2017.

[50] modAL library. https://modal-python.readthedocs.io/en/latest/.
[51] Scikit-learn. https://scikit-learn.org/stable/.

[52] Vijay V Vazirani. Approximation algorithms. Springer Science & Business Media,2013.

[53] Petros Venetis, Hector Garcia-Molina, Kerui Huang, and Neoklis Polyzotis. Maxalgorithms in crowdsourcing environments. In Proceedings of the 21st internationalconference on World Wide Web, pages 989–998, 2012.

[54] Victor Verdugo. Skyline computation with noisy comparisons. In Combinatorial

Algorithms: 31st International Workshop, IWOCA 2020, Bordeaux, France, June

8–10, 2020, Proceedings, page 289. Springer.[55] Vasilis Verroios and Hector Garcia-Molina. Entity resolution with crowd errors.

In 2015 IEEE 31st International Conference on Data Engineering, pages 219–230.IEEE, 2015.

[56] Norases Vesdapunt, Kedar Bellare, and Nilesh Dalvi. Crowdsourcing algorithmsfor entity resolution. Proceedings of the VLDB Endowment, 7(12):1071–1082, 2014.

[57] Ramya Korlakai Vinayak and Babak Hassibi. Crowdsourced clustering: Queryingedges vs triangles. In Advances in Neural Information Processing Systems, pages1316–1324, 2016.

[58] Jiannan Wang, Tim Kraska, Michael J Franklin, and Jianhua Feng. Crowder:Crowdsourcing entity resolution. Proceedings of the VLDB Endowment, 5(11),2012.

[59] David PWilliamson and David B Shmoys. The design of approximation algorithms.Cambridge university press, 2011.

[60] Chao Zhang, Fangbo Tao, Xiusi Chen, Jiaming Shen, Meng Jiang, Brian Sadler,Michelle Vanni, and Jiawei Han. Taxogen: Unsupervised topic taxonomy con-struction by adaptive term embedding and clustering. In Proceedings of the 24th

ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,pages 2701–2709, 2018.


8 FINDING MAXIMUM

Lemma 8.1 (Hoeffding's Inequality). If 𝑋1, 𝑋2, Β· Β· Β· , 𝑋𝑛 are independent random variables with π‘Žπ‘– ≀ 𝑋𝑖 ≀ 𝑏𝑖 for all 𝑖 ∈ [𝑛], then

Pr[ |βˆ‘_𝑖 (𝑋𝑖 βˆ’ E[𝑋𝑖])| β‰₯ π‘›πœ– ] ≀ 2 exp( βˆ’2𝑛²ϡ² / βˆ‘_𝑖 (𝑏𝑖 βˆ’ π‘Žπ‘–)Β² ).

8.1 Adversarial Noise

Let the maximum value among 𝑉 be denoted by 𝑣max, and let the set of records for which the oracle answer can be incorrect be 𝐢 = {𝑒 ∈ 𝑉 : 𝑒 β‰₯ 𝑣max/(1 + ΞΌ)}.

Claim 8.2. For any partition 𝑉𝑖, Tournament(𝑉𝑖) uses at most 2|𝑉𝑖| oracle queries.

Proof. Consider the 𝑖-th round in Tournament. The number of remaining values is at most |𝑉𝑖|/2^𝑖, so we make at most |𝑉𝑖|/2^{𝑖+1} oracle queries in this round. The total number of oracle queries made is
βˆ‘_{𝑖=0}^{log 𝑛} |𝑉𝑖|/2^{𝑖+1} ≀ 2|𝑉𝑖|. β–‘

Lemma 8.3. Given a set of values 𝑆, Count-Max(𝑆) returns a (1 + ΞΌ)Β²-approximation of the maximum value of 𝑆 using 𝑂(|𝑆|Β²) oracle queries.

Proof. Let 𝑣max = max{π‘₯ ∈ 𝑆}. Consider a value 𝑀 ∈ 𝑆 such that 𝑀 < 𝑣max/(1 + ΞΌ)Β². We compare the Count values for 𝑣max and 𝑀, given by Count(𝑣max, 𝑆) = βˆ‘_{π‘₯βˆˆπ‘†} 1{O(𝑣max, π‘₯) == No} and Count(𝑀, 𝑆) = βˆ‘_{π‘₯βˆˆπ‘†} 1{O(𝑀, π‘₯) == No}. We argue that 𝑀 can never be returned by Algorithm 1, i.e., Count(𝑀, 𝑆) < Count(𝑣max, 𝑆).

Count(𝑣max, 𝑆) = βˆ‘_{π‘₯βˆˆπ‘†} 1{O(𝑣max, π‘₯) == No}
  β‰₯ βˆ‘_{π‘₯βˆˆπ‘†\{𝑣max}} 1{π‘₯ < 𝑣max/(1 + ΞΌ)}
  = 1{O(𝑣max, 𝑀) == No} + βˆ‘_{π‘₯βˆˆπ‘†\{𝑣max, 𝑀}} 1{π‘₯ < 𝑣max/(1 + ΞΌ)}
  = 1 + βˆ‘_{π‘₯βˆˆπ‘†\{𝑣max, 𝑀}} 1{π‘₯ < 𝑣max/(1 + ΞΌ)}

Count(𝑀, 𝑆) = βˆ‘_{π‘¦βˆˆπ‘†} 1{O(𝑀, 𝑦) == No} = βˆ‘_{π‘¦βˆˆπ‘†\{𝑀, 𝑣max}} 1{O(𝑀, 𝑦) == No}
  ≀ βˆ‘_{π‘¦βˆˆπ‘†\{𝑀, 𝑣max}} 1{𝑦 ≀ (1 + ΞΌ)𝑀}
  ≀ βˆ‘_{π‘¦βˆˆπ‘†\{𝑀, 𝑣max}} 1{𝑦 ≀ 𝑣max/(1 + ΞΌ)}

Combining the two, we have Count(𝑣max, 𝑆) > Count(𝑀, 𝑆). This shows that the Count of 𝑣max is strictly greater than the Count of any point 𝑀 with 𝑀 < 𝑣max/(1 + ΞΌ)Β². Therefore, our algorithm would output 𝑣max instead of 𝑀. For calculating the Count for all values in 𝑆, we make at most |𝑆|Β² oracle queries, as we compare every value with every other value. Finally, we output the value with the highest Count as the maximum. Hence, the claim. β–‘
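As a concrete illustration of Count-Max, the following is a minimal Python sketch. The helper name oracle_no is ours: it is assumed to return True whenever the (possibly noisy) comparison oracle answers 'No' for the queried pair, matching the Count definition above.

```python
def count_max(S, oracle_no):
    """Count-Max: return the element of S with the largest Count score.

    oracle_no(u, x): True when the noisy comparison oracle answers 'No'
    for the pair (u, x). Count(u, S) is the number of 'No' answers u
    receives against all other values; the element with the highest Count
    is returned as the approximate maximum. Uses O(|S|^2) oracle queries.
    """
    best, best_count = None, -1
    for u in S:
        count = sum(1 for x in S if x is not u and oracle_no(u, x))
        if count > best_count:
            best, best_count = u, count
    return best
```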

Lemma 8.4 (Lemma 3.3 restated). Suppose 𝑣max is the maximum value among the set of records 𝑉. Algorithm 2 outputs a value 𝑒max such that 𝑒max β‰₯ 𝑣max/(1 + ΞΌ)^{2 log_Ξ» 𝑛} using 𝑂(𝑛λ) oracle queries.

Proof. From Lemma 8.3, we lose a factor of (1 + ΞΌ)Β² in each level of the tournament tree; hence, after log_Ξ» 𝑛 levels, the final output has an approximation guarantee of (1 + ΞΌ)^{2 log_Ξ» 𝑛}. The total number of queries used is
βˆ‘_{𝑖=0}^{log_Ξ» 𝑛} (|𝑉𝑖|/Ξ») Β· λ² = 𝑂(𝑛λ),
where |𝑉𝑖| is the number of records at level 𝑖. β–‘

Lemma 8.5. Suppose |𝐢| > βˆšπ‘›/2. Let Ṽ denote a set of 2βˆšπ‘› log(2/𝛿) samples obtained by uniform sampling with replacement from 𝑉. Then Ṽ contains a (1 + ΞΌ)-approximation of the maximum value 𝑣max with probability 1 βˆ’ 𝛿/2.


Proof. Consider the first step, where we use a uniformly random sample Ṽ of βˆšπ‘›π‘‘ = 2βˆšπ‘› log(2/𝛿) values from 𝑉 (obtained by sampling with replacement). Given |𝐢| β‰₯ βˆšπ‘›/2, the probability that Ṽ contains a value from 𝐢 is
Pr[Ṽ ∩ 𝐢 β‰  βˆ…] = 1 βˆ’ (1 βˆ’ |𝐢|/𝑛)^{|Ṽ|} > 1 βˆ’ (1 βˆ’ 1/(2βˆšπ‘›))^{2βˆšπ‘› log(2/𝛿)} > 1 βˆ’ 𝛿/2.
So, with probability 1 βˆ’ 𝛿/2, there exists a value 𝑒 ∈ 𝐢 ∩ Ṽ. Hence, the claim. β–‘

Lemma 8.6. Suppose the partition 𝑉𝑖 contains the maximum value 𝑣max of 𝑉. If |𝐢| ≀ βˆšπ‘›/2, then Tournament(𝑉𝑖) returns 𝑣max with probability at least 1/2.

Proof. Algorithm 4 uses a modified tournament tree that partitions the set 𝑉 into 𝑙 = βˆšπ‘› parts of size 𝑛/𝑙 = βˆšπ‘› each and identifies a maximum 𝑝𝑖 from each partition 𝑉𝑖 using Algorithm 2. If 𝑣max ∈ 𝑉𝑖, then
E[|𝐢 ∩ 𝑉𝑖|] = |𝐢|/𝑙 ≀ (βˆšπ‘›/2)/βˆšπ‘› = 1/2.
Using Markov's inequality, the probability that 𝑉𝑖 contains a value from 𝐢 satisfies
Pr[|𝐢 ∩ 𝑉𝑖| β‰₯ 1] ≀ E[|𝐢 ∩ 𝑉𝑖|] ≀ 1/2.
Therefore, with probability at least 1/2, 𝑣max is never compared with any point from 𝐢 in the partition 𝑉𝑖 containing 𝑣max. Hence, 𝑣max is returned by Tournament(𝑉𝑖) with probability at least 1/2. β–‘

Lemma 8.7 (Lemma 3.5 restated). (1) If |𝐢| > βˆšπ‘›/2, then there exists a value 𝑣𝑗 ∈ Ṽ satisfying 𝑣𝑗 β‰₯ 𝑣max/(1 + ΞΌ) with probability 1 βˆ’ 𝛿/2.
(2) Suppose |𝐢| ≀ βˆšπ‘›/2. Then 𝑇 contains 𝑣max with probability at least 1 βˆ’ 𝛿/2.

Proof. Claim (1) follows from Lemma 8.5.

In every iteration 𝑖 ≀ 𝑑 of Algorithm 4, 𝑣max ∈ 𝑇𝑖 with probability 1/2 (using Lemma 8.6). To increase the success probability, we run this procedure 𝑑 times and collect all the outputs. Among the 𝑑 = 2 log(2/𝛿) runs of Algorithm 2, 𝑣max is never compared with any value of 𝐢 in at least one of the iterations with probability at least
1 βˆ’ (1 βˆ’ 1/2)^{2 log(2/𝛿)} β‰₯ 1 βˆ’ 𝛿/2.
Hence, 𝑇 = βˆͺ𝑖 𝑇𝑖 contains 𝑣max with probability 1 βˆ’ 𝛿/2. β–‘

Theorem 8.8 (Theorem 3.6 restated). Given a set of values 𝑉, Algorithm 4 returns a (1 + ΞΌ)Β³-approximation of the maximum value with probability 1 βˆ’ 𝛿 using 𝑂(𝑛 logΒ²(1/𝛿)) oracle queries.

Proof. In Algorithm 4, we first identify an approximate maximum value using sampling. If |𝐢| β‰₯ βˆšπ‘›/2, then from Lemma 8.5 the value returned is a (1 + ΞΌ)-approximation of the maximum value of 𝑉. Otherwise, from Lemma 8.7, 𝑇 contains 𝑣max with probability 1 βˆ’ 𝛿/2. As we use Count-Max on the set Ṽ βˆͺ 𝑇, the value returned, i.e., 𝑒max, is a (1 + ΞΌ)Β²-approximation of the maximum among the values in Ṽ βˆͺ 𝑇. Therefore, 𝑒max β‰₯ 𝑣max/(1 + ΞΌ)Β³. Using a union bound, the total probability of failure is 𝛿.

For the query complexity, Algorithm 3 obtains a set Ṽ of βˆšπ‘›π‘‘ sampled values. Along with the set 𝑇 obtained (where |𝑇| = 𝑛𝑑/𝑙), we use Count-Max on Ṽ βˆͺ 𝑇 to output the maximum 𝑒max. This step requires 𝑂(|Ṽ βˆͺ 𝑇|Β²) = 𝑂((βˆšπ‘›π‘‘ + 𝑛𝑑/𝑙)Β²) oracle queries. In an iteration 𝑖, for obtaining 𝑇𝑖, we make 𝑂(βˆ‘_𝑗 |𝑉𝑗|) = 𝑂(𝑛) oracle queries (Claim 8.2), and over 𝑑 iterations we make 𝑂(𝑛𝑑) queries. Using 𝑑 = 2 log(2/𝛿) and 𝑙 = βˆšπ‘›, in total we make 𝑂(𝑛𝑑 + (βˆšπ‘›π‘‘ + 𝑛𝑑/𝑙)Β²) = 𝑂(𝑛 logΒ²(1/𝛿)) oracle queries. Hence, the theorem. β–‘
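For intuition, here is a minimal Python sketch of this two-phase structure (sample, repeated tournaments, then Count-Max on the union). The helpers count_max and tournament_partition are stand-ins for the paper's subroutines, and the constants follow the analysis above; it is a sketch, not the paper's implementation.

```python
import math
import random

def approx_max_adversarial(V, count_max, tournament_partition, delta, rng=random):
    """Skeleton of the maximum-finding procedure under adversarial noise.

    count_max(S): Count-Max over a small set S (O(|S|^2) noisy comparisons).
    tournament_partition(V, l): runs a tournament on each of l random
        partitions of V and returns the l winners.
    """
    n = len(V)
    t = max(1, int(2 * math.log(2 / delta)))   # number of tournament repetitions
    l = max(1, int(math.sqrt(n)))              # number of partitions
    # Uniform sample handles the case where the confusion region C is large.
    sample = [rng.choice(V) for _ in range(int(math.sqrt(n)) * t)]
    winners = []
    for _ in range(t):                         # repeat to boost success probability
        winners.extend(tournament_partition(V, l))
    return count_max(sample + winners)         # final Count-Max over the union
```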

8.2 Probabilistic Noise

Lemma 8.9. Suppose the maximum value 𝑒max is returned by Algorithm 2 with parameters (𝑉, 𝑛). Then rank(𝑒max, 𝑉) = 𝑂(√(𝑛 log(1/𝛿))) with probability 1 βˆ’ 𝛿.

Proof. For the maximum value 𝑣max, the expected count value is
E[Count(𝑣max, 𝑉)] = βˆ‘_{π‘€βˆˆπ‘‰} Pr[O(𝑀, 𝑣max) == 𝑀] = (𝑛 βˆ’ 1)(1 βˆ’ 𝑝).


Using Hoeffding's inequality, with probability 1 βˆ’ 𝛿/2,
Count(𝑣max, 𝑉) β‰₯ (𝑛 βˆ’ 1)(1 βˆ’ 𝑝) βˆ’ √(((𝑛 βˆ’ 1) log(2/𝛿))/2).

Consider a record 𝑒 ∈ 𝑉 with rank at least 5√(2𝑛 log(2/𝛿)). Then
E[Count(𝑒, 𝑉)] = (𝑛 βˆ’ rank(𝑒))(1 βˆ’ 𝑝) + (rank(𝑒) βˆ’ 1)𝑝.
Using Hoeffding's inequality, with probability 1 βˆ’ 𝛿/2,
Count(𝑒, 𝑉) < (𝑛 βˆ’ 1)(1 βˆ’ 𝑝) βˆ’ (rank(𝑒) βˆ’ 1)(1 βˆ’ 2𝑝) + √(0.5(𝑛 βˆ’ 1) log(2/𝛿))
  < (𝑛 βˆ’ 1)(1 βˆ’ 𝑝) βˆ’ (5√(2𝑛 log(2/𝛿)) βˆ’ 1)(1 βˆ’ 2𝑝) + √(0.5(𝑛 βˆ’ 1) log(2/𝛿))
  < Count(𝑣max, 𝑉).

The last inequality holds for 𝑝 ≀ 0.4. As Algorithm 2 returns the record 𝑒max with the maximum Count value, we have rank(𝑒max, 𝑉) = 𝑂(√(𝑛 log(1/𝛿))). Using a union bound for the above conditions to hold simultaneously, we have the claim. β–‘

To improve the query complexity, we use an early stopping criterion that discards a value π‘₯ based on Count(π‘₯, 𝑉) when it determines that π‘₯ has no chance of being the maximum. Algorithm 12 presents the pseudocode for this modified count calculation. We sample 100 log(𝑛/𝛿) values uniformly at random, denoted by 𝑆𝑑, and compare every non-sampled point with 𝑆𝑑. We argue that doing so eliminates the values that are far from the maximum in the sorted ranking. Using Algorithm 12, we compute the Count score of each value 𝑒 ∈ 𝑉 \ 𝑆𝑑 with respect to 𝑆𝑑, and if Count(𝑒, 𝑆𝑑) β‰₯ 50 log(𝑛/𝛿), we keep it available for the subsequent iterations.

Algorithm 12 Count-Max-Prob: Maximum with Probabilistic Noise
1: Input: A set 𝑉 of 𝑛 values, failure probability 𝛿.
2: Output: An approximate maximum value of 𝑉.
3: 𝑑 ← 1
4: while 𝑑 < log(𝑛) or |𝑉| > 100 log(𝑛/𝛿) do
5:   Let 𝑆𝑑 denote a set of 100 log(𝑛/𝛿) values obtained by sampling uniformly at random from 𝑉 with replacement.
6:   Set 𝑋 ← βˆ…
7:   for 𝑒 ∈ 𝑉 \ 𝑆𝑑 do
8:     if Count(𝑒, 𝑆𝑑) β‰₯ 50 log(𝑛/𝛿) then
9:       𝑋 ← 𝑋 βˆͺ {𝑒}
10:  𝑉 ← 𝑋, 𝑑 ← 𝑑 + 1
11: 𝑒max ← Count-Max(𝑉)
12: return 𝑒max
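For illustration, a minimal Python sketch of this iterative elimination follows. The helper oracle_no and the simplified loop condition are ours; the thresholds mirror the constants in Algorithm 12, which are not optimized.

```python
import math
import random

def count_max_prob(V, oracle_no, delta, rng=random):
    """Sketch of the elimination scheme of Algorithm 12 (probabilistic noise).

    oracle_no(u, x): True when the noisy oracle answers 'No' for the pair (u, x).
    Each round compares survivors against a random reference sample S_t and
    keeps only those clearing the 50 log(n/delta) threshold; a plain
    Count-Max over the survivors finishes the computation.
    """
    n = len(V)
    sample_size = max(1, int(100 * math.log(n / delta)))
    threshold = sample_size / 2                  # i.e., 50 log(n/delta)
    candidates, t = list(V), 1
    while t < math.log(n) and len(candidates) > sample_size:
        S_t = [rng.choice(candidates) for _ in range(sample_size)]
        survivors = []
        for u in candidates:
            if u in S_t:
                continue                          # sampled values are discarded
            if sum(oracle_no(u, x) for x in S_t) >= threshold:
                survivors.append(u)
        candidates, t = survivors, t + 1
    if not candidates:
        return None
    # Final pass: plain Count-Max on the remaining candidates.
    return max(candidates,
               key=lambda u: sum(oracle_no(u, x) for x in candidates if x is not u))
```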

As Algorithm 12 compares each value 𝑒 ∈ 𝑉 \ 𝑆𝑑 with every value π‘₯ ∈ 𝑆𝑑 and the error probability is at most 𝑝, the expected count of 𝑣max (if it is still available) in any iteration 𝑑 is (1 βˆ’ 𝑝)|𝑆𝑑|. Accounting for the deviation around the expected value, Count(𝑣max, 𝑆𝑑) is at least 50 log(𝑛/𝛿) when 𝑝 ≀ 0.4⁴. If a particular value 𝑒 has Count(𝑒, 𝑆𝑑) < 50 log(𝑛/𝛿) in some iteration, then it cannot be the largest value in 𝑉, and therefore we remove it from the set of candidates for the maximum. Hence, any value that remains in 𝑉 after an iteration 𝑑 must have rank close to that of 𝑣max. We argue that after every iteration, the number of candidates remaining is at most 1/60th of the current candidates.

Lemma 8.10. In an iteration 𝑑 containing 𝑛𝑑 remaining records, using Algorithm 5, with probability 1 βˆ’ 𝛿/𝑛, we discard at least (59/60) Β· 𝑛𝑑 records.

Proof. Consider an iteration 𝑑 of Algorithm 5 with 𝑛𝑑 remaining records, and a record 𝑒 with rank 𝛼 Β· 𝑛𝑑. We have
E[Count(𝑒, 𝑆𝑑)] = ((1 βˆ’ 𝛼)(1 βˆ’ 𝑝) + 𝛼𝑝) Β· 100 log(𝑛/𝛿).
For 𝛼 = 0, i.e., for the maximum value 𝑣max,
E[Count(𝑣max, 𝑆𝑑)] = (1 βˆ’ 𝑝) Β· 100 log(𝑛/𝛿).
Using 𝑝 ≀ 0.4 and Hoeffding's inequality, with probability 1 βˆ’ 𝛿/𝑛², we have
Count(𝑣max, 𝑆𝑑) β‰₯ (1 βˆ’ 𝑝) Β· 100 log(𝑛/𝛿) βˆ’ √(100 log(𝑛/𝛿)) β‰₯ 50 log(𝑛/𝛿).

4The constants 50, 100 etc. are not optimized and set just to satisfy certain concentration bounds.


For 𝑒, we bound the Count value. Using 𝑝 ≀ 0.4 and Hoeffding's inequality, with probability 1 βˆ’ 𝛿/𝑛², we have
Count(𝑒, 𝑆𝑑) < ((1 βˆ’ 𝛼)(1 βˆ’ 𝑝) + 𝛼𝑝) Β· 100 log(𝑛/𝛿) + √(100((1 βˆ’ 𝛼)(1 βˆ’ 𝑝) + 𝛼𝑝) log(𝑛/𝛿))
  < ((1 βˆ’ 0.6𝛼) Β· 100 + √(100(1 βˆ’ 0.6𝛼))) log(𝑛/𝛿) < 50 log(𝑛/𝛿).
Upon calculation, the above holds for 𝛼 > 59/60. Therefore, using a union bound, with probability 1 βˆ’ 𝑂(𝛿/𝑛), all records 𝑒 with rank at least 59𝑛𝑑/60 satisfy
Count(𝑒, 𝑆𝑑) < Count(𝑣max, 𝑆𝑑),
so all such values can be removed. Hence, the claim. β–‘

In the previous lemma, we argued that in every iteration at least a 59/60 fraction of the remaining values is removed, and therefore the algorithm terminates in Θ(log 𝑛) iterations. In each iteration, we discard the sampled values 𝑆𝑑 to ensure that there is no dependency between the Count scores, so that our guarantees hold. As we remove at most 𝑂(𝑑 Β· log(𝑛/𝛿)) = 𝑂(logΒ²(𝑛/𝛿)) sampled points, our final statement of the result is:

Lemma 8.11. The query complexity of Algorithm 5 is 𝑂(𝑛 Β· logΒ²(𝑛/𝛿)), and 𝑒max satisfies rank(𝑒max, 𝑉) ≀ 𝑂(logΒ²(𝑛/𝛿)) with probability 1 βˆ’ 𝛿.

Proof. From Lemma 8.10, with probability 1 βˆ’ 𝛿/𝑛, after iteration 𝑑 at least 59𝑛𝑑/60 records are removed, in addition to the 100 log(𝑛/𝛿) records that are sampled. Therefore,
𝑛_{𝑑+1} ≀ 𝑛𝑑/60 βˆ’ 100 log(𝑛/𝛿).
After log(𝑛/𝛿) iterations, we have 𝑛_{𝑑+1} ≀ 1. As we have removed log_{60} 𝑛 Β· 100 log(𝑛/𝛿) sampled records in total, these could be records with rank ≀ 100 logΒ²(𝑛/𝛿). So the rank of the output 𝑒max is at most 100 logΒ²(𝑛/𝛿). In an iteration 𝑑, the number of oracle queries for calculating the Count values is 𝑂(𝑛𝑑 Β· log(𝑛/𝛿)). In total, Algorithm 5 makes 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries. Using a union bound over the log(𝑛/𝛿) iterations, we get a total failure probability of 𝛿. β–‘

Theorem 8.12 (Theorem 3.7 restated). There is an algorithm that returns 𝑒max ∈ 𝑉 such that rank(𝑒max, 𝑉) = 𝑂(logΒ²(𝑛/𝛿)) with probability 1 βˆ’ 𝛿 and requires 𝑂(𝑛 logΒ²(𝑛/𝛿)) oracle queries.

Proof. The proof follows from Lemma 8.11. β–‘

9 FARTHEST AND NEAREST NEIGHBOR

Lemma 9.1 (Lemma 3.9 restated). Suppose max_{π‘£π‘–βˆˆπ‘†} 𝑑(𝑒, 𝑣𝑖) ≀ 𝛼 and |𝑆| β‰₯ 6 log(1/𝛿). Consider two records 𝑣𝑖 and 𝑣𝑗 such that 𝑑(𝑒, 𝑣𝑖) < 𝑑(𝑒, 𝑣𝑗) βˆ’ 2𝛼; then FCount(𝑣𝑖, 𝑣𝑗) β‰₯ 0.3|𝑆| with probability 1 βˆ’ 𝛿.

Proof. Since 𝑑(𝑒, 𝑣𝑖) < 𝑑(𝑒, 𝑣𝑗) βˆ’ 2𝛼, for a point π‘₯ ∈ 𝑆,
𝑑(𝑣𝑗, π‘₯) β‰₯ 𝑑(𝑒, 𝑣𝑗) βˆ’ 𝑑(𝑒, π‘₯)
  > 𝑑(𝑒, 𝑣𝑖) + 2𝛼 βˆ’ 𝑑(𝑒, π‘₯)
  β‰₯ 𝑑(𝑣𝑖, π‘₯) βˆ’ 𝑑(𝑒, π‘₯) + 2𝛼 βˆ’ 𝑑(𝑒, π‘₯)
  β‰₯ 𝑑(𝑣𝑖, π‘₯) + 2𝛼 βˆ’ 2𝑑(𝑒, π‘₯)
  β‰₯ 𝑑(𝑣𝑖, π‘₯).
So O(𝑣𝑖, π‘₯, 𝑣𝑗, π‘₯) is No with probability 𝑝. As 𝑝 ≀ 0.4, we have
E[FCount(𝑣𝑖, 𝑣𝑗)] = (1 βˆ’ 𝑝)|𝑆|,
Pr[FCount(𝑣𝑖, 𝑣𝑗) ≀ 0.3|𝑆|] ≀ Pr[FCount(𝑣𝑖, 𝑣𝑗) ≀ (1 βˆ’ 𝑝)|𝑆|/2].
From Hoeffding's inequality (with binary random variables), FCount(𝑣𝑖, 𝑣𝑗) ≀ (1 βˆ’ 𝑝)|𝑆|/2 with probability at most exp(βˆ’|𝑆|(1 βˆ’ 𝑝)Β²/2) ≀ 𝛿 (using |𝑆| β‰₯ 6 log(1/𝛿) and 𝑝 < 0.4). Therefore, with probability at most 𝛿, FCount(𝑣𝑖, 𝑣𝑗) ≀ 0.3|𝑆|. β–‘

For the sake of completeness, we restate the Count definition used in Algorithm Count-Max. Every oracle comparison is replaced with the pairwise comparison query described in Section 3.3. Let 𝑒 be a query point and let 𝑆 denote a set of Θ(log(𝑛/𝛿)) points within distance 𝛼 of 𝑒. We maintain a Count score for a given point 𝑣𝑖 ∈ 𝑉 as
Count(𝑒, 𝑣𝑖, 𝑆, 𝑉) = βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣𝑖}} 1{Pairwise-Comp(𝑒, 𝑣𝑖, 𝑣𝑗, 𝑆) == No}.

Lemma 9.2. Given a query vertex 𝑒 and a set 𝑆 with |𝑆| = Ξ©(log(𝑛/𝛿)) such that max_{π‘£βˆˆπ‘†} 𝑑(𝑒, 𝑣) ≀ 𝛼, the farthest point identified using Algorithm 13 (with Pairwise-Comp), denoted 𝑒max, is within 4𝛼 of the optimal farthest point, i.e., 𝑑(𝑒, 𝑒max) β‰₯ max_{π‘£βˆˆπ‘‰} 𝑑(𝑒, 𝑣) βˆ’ 4𝛼, with probability 1 βˆ’ 𝛿. Further, the query complexity is 𝑂(𝑛² log(𝑛/𝛿)).


Algorithm 13 Count-Max(𝑉): finds the farthest point by counting in 𝑉
1: Input: A set of points 𝑉, a query point 𝑒, and a set 𝑆.
2: Output: An approximate farthest point from 𝑒.
3: for 𝑣𝑖 ∈ 𝑉 do
4:   Calculate Count(𝑒, 𝑣𝑖, 𝑆, 𝑉)
5: 𝑒max ← arg max_{π‘£π‘–βˆˆπ‘‰} Count(𝑒, 𝑣𝑖, 𝑆, 𝑉)
6: return 𝑒max

Proof. Let 𝑣max = arg max_{π‘£βˆˆπ‘‰} 𝑑(𝑒, 𝑣). Consider a point 𝑀 ∈ 𝑉 such that 𝑑(𝑒, 𝑀) < 𝑑(𝑒, 𝑣max) βˆ’ 4𝛼. We compare the Count values for 𝑣max and 𝑀, given by Count(𝑒, 𝑣max, 𝑆, 𝑉) = βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣max}} 1{Pairwise-Comp(𝑒, 𝑣max, 𝑣𝑗, 𝑆) == No} and Count(𝑒, 𝑀, 𝑆, 𝑉) = βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑀}} 1{Pairwise-Comp(𝑒, 𝑀, 𝑣𝑗, 𝑆) == No}. We argue that 𝑀 can never be returned by Algorithm 13, i.e., Count(𝑒, 𝑀, 𝑆, 𝑉) < Count(𝑒, 𝑣max, 𝑆, 𝑉). Using Lemma 9.1 we have:

Count(𝑒, 𝑣max, 𝑆, 𝑉) = βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣max}} 1{Pairwise-Comp(𝑒, 𝑣max, 𝑣𝑗, 𝑆) == No}
  β‰₯ βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣max}} 1{𝑑(𝑒, 𝑣𝑗) < 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼}
  = 1{𝑑(𝑒, 𝑀) < 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼} + βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣max, 𝑀}} 1{𝑑(𝑒, 𝑣𝑗) < 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼}
  = 1 + βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑣max, 𝑀}} 1{𝑑(𝑒, 𝑣𝑗) < 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼}

Count(𝑒, 𝑀, 𝑆, 𝑉) = βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑀}} 1{Pairwise-Comp(𝑒, 𝑀, 𝑣𝑗, 𝑆) == No}
  ≀ βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑀}} 1{𝑑(𝑒, 𝑣𝑗) < 𝑑(𝑒, 𝑀) + 2𝛼}
  ≀ βˆ‘_{π‘£π‘—βˆˆπ‘‰\{𝑀, 𝑣max}} 1{𝑑(𝑒, 𝑣𝑗) < 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼}

Combining the two, we have Count(𝑒, 𝑣max, 𝑆, 𝑉) > Count(𝑒, 𝑀, 𝑆, 𝑉). This shows that the Count of 𝑣max is strictly greater than the Count of any point 𝑀 with 𝑑(𝑒, 𝑀) < 𝑑(𝑒, 𝑣max) βˆ’ 4𝛼. Therefore, our algorithm would output 𝑣max instead of 𝑀. For calculating the Count for all points in 𝑉, we make at most |𝑉|Β² Β· |𝑆| oracle queries, as we compare every point with every other point using Algorithm 5. Finally, we output the point 𝑒max with the highest Count. From Lemma 9.1, when |𝑆| = Ξ©(log(𝑛/𝛿)), the answer to any pairwise query is correct with failure probability 𝛿/𝑛². As there are 𝑛² pairwise comparisons, each with failure probability 𝛿/𝑛², a union bound gives a total failure probability of 𝛿. Hence, the claim. β–‘

Algorithm 14 Tournament: finds the farthest point using a tournament tree
1: Input: Set of values 𝑉, degree Ξ», query point 𝑒, and a set 𝑆.
2: Output: An approximate farthest point from 𝑒.
3: Construct a balanced Ξ»-ary tree T with |𝑉| nodes as leaves.
4: Let πœ‹π‘‰ be a random permutation of 𝑉 assigned to the leaves of T.
5: for 𝑖 = 1 to log_Ξ» |𝑉| do
6:   for each internal node 𝑀 at level log_Ξ» |𝑉| βˆ’ 𝑖 do
7:     Let π‘ˆ denote the children of 𝑀.
8:     Set the internal node 𝑀 to Count-Max(𝑒, 𝑆, π‘ˆ).
9: 𝑒max ← point at the root of T
10: return 𝑒max
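A minimal Python sketch of this Ξ»-ary tournament structure follows; the helper count_max stands in for Count-Max(𝑒, 𝑆, π‘ˆ) and the function signature is ours.

```python
import random

def tournament(V, lam, count_max, rng=random):
    """Lambda-ary tournament: repeatedly reduce groups of size lam to a single
    winner using count_max until one value remains.

    count_max(group): returns the winner of a small group via noisy comparisons.
    """
    level = list(V)
    rng.shuffle(level)                       # random assignment to the leaves
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), lam):  # each internal node spans lam children
            group = level[i:i + lam]
            next_level.append(count_max(group))
        level = next_level
    return level[0]
```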

Let the farthest point from the query point 𝑒 among 𝑉 be denoted by 𝑣max, and let the set of records for which the oracle answer can be incorrect be 𝐢 = {𝑀 ∈ 𝑉 : 𝑑(𝑒, 𝑀) β‰₯ 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼}.


Algorithm 15 Tournament-Partition
1: Input: Set of values 𝑉, number of partitions 𝑙, query point 𝑒, and a set 𝑆.
2: Output: A set of farthest points, one from each partition.
3: Randomly partition 𝑉 into 𝑙 equal parts 𝑉1, 𝑉2, Β· Β· Β· , 𝑉𝑙.
4: for 𝑖 = 1 to 𝑙 do
5:   𝑝𝑖 ← Tournament(𝑒, 𝑆, 𝑉𝑖, 2)
6:   𝑇 ← 𝑇 βˆͺ {𝑝𝑖}
7: return 𝑇

Algorithm 16 Max-Prob: Maximum with Probabilistic Noise
1: Input: Set of values 𝑉, number of iterations 𝑑, query point 𝑒, and a set 𝑆.
2: Output: An approximate farthest point 𝑒max.
3: 𝑖 ← 1, 𝑇 ← βˆ…
4: Let Ṽ denote a sample of size βˆšπ‘›π‘‘ selected uniformly at random (with replacement) from 𝑉.
5: for 𝑖 ≀ 𝑑 do
6:   𝑇𝑖 ← Tournament-Partition(𝑒, 𝑆, 𝑉, 𝑙)
7:   𝑇 ← 𝑇 βˆͺ 𝑇𝑖
8: 𝑒max ← Count-Max(𝑒, 𝑆, Ṽ βˆͺ 𝑇)
9: return 𝑒max

Lemma 9.3. (1) If |𝐢| > βˆšπ‘›/2, then there exists a value 𝑣𝑗 ∈ Ṽ satisfying 𝑑(𝑒, 𝑣𝑗) β‰₯ 𝑑(𝑒, 𝑣max) βˆ’ 2𝛼 with probability 1 βˆ’ 𝛿/2.
(2) Suppose |𝐢| ≀ βˆšπ‘›/2. Then 𝑇 contains 𝑣max with probability at least 1 βˆ’ 𝛿/2.

Proof. The proof is similar to Lemma 8.7. β–‘

Theorem 9.4 (Theorem 3.10 restated). Given a query vertex 𝑒 and a set 𝑆 with |𝑆| = Ξ©(log(𝑛/𝛿)) such that max_{π‘£βˆˆπ‘†} 𝑑(𝑒, 𝑣) ≀ 𝛼, the farthest point identified using Algorithm 4 (with Pairwise-Comp), denoted 𝑒max, is within 6𝛼 of the optimal farthest point, i.e., 𝑑(𝑒, 𝑒max) β‰₯ max_{π‘£βˆˆπ‘‰} 𝑑(𝑒, 𝑣) βˆ’ 6𝛼, with probability 1 βˆ’ 𝛿. Further, the query complexity is 𝑂(𝑛 logΒ³(𝑛/𝛿)).

Proof. The proof is similar to Theorem 8.8. In Algorithm 16, we first identify an approximate farthest point using sampling. If |𝐢| β‰₯ βˆšπ‘›/2, then from Lemma 9.3 the point returned is within 2𝛼 (additively) of the farthest distance. Otherwise, from Lemma 9.3, 𝑇 contains 𝑣max with probability 1 βˆ’ 𝛿/2. As we use Count-Max on the set Ṽ βˆͺ 𝑇, the point returned, i.e., 𝑒max, is within 4𝛼 of the farthest among the points in Ṽ βˆͺ 𝑇. Therefore, 𝑑(𝑒, 𝑒max) β‰₯ 𝑑(𝑒, 𝑣max) βˆ’ 6𝛼. Using a union bound over the 𝑛 Β· 𝑑 comparisons, the total probability of failure is 𝛿.

For the query complexity, the algorithm obtains a set Ṽ of βˆšπ‘›π‘‘ sampled values (line 4 of Algorithm 16). Along with the set 𝑇 obtained (where |𝑇| = 𝑛𝑑/𝑙), we use Count-Max on Ṽ βˆͺ 𝑇 to output 𝑒max. This step requires 𝑂(|Ṽ βˆͺ 𝑇|Β² |𝑆|) = 𝑂((βˆšπ‘›π‘‘ + 𝑛𝑑/𝑙)Β² log(𝑛/𝛿)) oracle queries. In an iteration 𝑖, for obtaining 𝑇𝑖, we make 𝑂(βˆ‘_𝑗 |𝑉𝑗| log(𝑛/𝛿)) = 𝑂(𝑛 log(𝑛/𝛿)) oracle queries (Claim 8.2), and over 𝑑 iterations we make 𝑂(𝑛𝑑 log(𝑛/𝛿)) queries. Using 𝑑 = 2 log(2𝑛/𝛿) and 𝑙 = βˆšπ‘›, in total we make 𝑂(𝑛𝑑 log(𝑛/𝛿) + (βˆšπ‘›π‘‘ + 𝑛𝑑/𝑙)Β² log(𝑛/𝛿)) = 𝑂(𝑛 logΒ³(𝑛/𝛿)) oracle queries. Hence, the theorem. β–‘

10 π‘˜-CENTER : ADVERSARIAL NOISE

Lemma 10.1. Suppose in an iteration 𝑑 of the Greedy algorithm the centers are given by 𝑆𝑑, and we reassign points using Assign, which is a 𝛽-approximation of the correct assignment. In iteration 𝑑 + 1, using this assignment, if we obtain an 𝛼-approximate farthest point using Approx-Farthest, then after π‘˜ iterations the Greedy algorithm obtains a 2𝛼𝛽²-approximation for the π‘˜-center objective.

Proof. Consider an optimum clustering πΆβˆ— with centers 𝑒1, 𝑒2, . . . , π‘’π‘˜ and corresponding clusters πΆβˆ—(𝑒1), πΆβˆ—(𝑒2), Β· Β· Β· , πΆβˆ—(π‘’π‘˜). Let the centers obtained by Algorithm 6 be denoted by 𝑆. If |𝑆 ∩ πΆβˆ—(𝑒𝑖)| = 1 for all 𝑖, then for a point π‘₯ ∈ πΆβˆ—(𝑒𝑖) assigned to 𝑠𝑗 ∈ 𝑆 by Algorithm Assign, we have
𝑑(π‘₯, 𝑆 ∩ πΆβˆ—(𝑒𝑖)) ≀ 𝑑(π‘₯, 𝑒𝑖) + 𝑑(𝑒𝑖, 𝑆 ∩ πΆβˆ—(𝑒𝑖)) ≀ 2Β·OPT
⟹ 𝑑(π‘₯, 𝑠𝑗) ≀ 𝛽 min_{π‘ π‘˜βˆˆπ‘†} 𝑑(π‘₯, π‘ π‘˜) ≀ 𝛽 𝑑(π‘₯, 𝑆 ∩ πΆβˆ—(𝑒𝑖)) ≀ 2𝛽·OPT.
Therefore, every point in 𝑉 is at distance at most 2𝛽·OPT from a center in 𝑆.

Suppose for some 𝑗 we have |𝑆 ∩ πΆβˆ—(𝑒𝑗)| β‰₯ 2. Let 𝑠1, 𝑠2 ∈ 𝑆 ∩ πΆβˆ—(𝑒𝑗), where 𝑠2 was added after 𝑠1, in iteration 𝑑 + 1. As 𝑠1 ∈ 𝑆𝑑, we have min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑀, 𝑠2) ≀ 𝑑(𝑠1, 𝑠2). In iteration 𝑑, we know that the selected farthest point 𝑠2 is an 𝛼-approximation of the true farthest point (say 𝑓𝑑). Moreover, suppose 𝑠2 was assigned in iteration 𝑑 to the cluster with center π‘ π‘˜, which is a 𝛽-approximation of its true center. Therefore,
(1/𝛼) min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑀, 𝑓𝑑) ≀ 𝑑(π‘ π‘˜, 𝑠2) ≀ 𝛽 min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑀, 𝑠2) ≀ 𝛽 𝑑(𝑠1, 𝑠2).


Because 𝑠1 and 𝑠2 are in the same optimum cluster, the triangle inequality gives 𝑑(𝑠1, 𝑠2) ≀ 2Β·OPT. Combining the above, we get min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑀, 𝑓𝑑) ≀ 2𝛼𝛽·OPT, which means that the farthest point of iteration 𝑑 is at distance at most 2𝛼𝛽·OPT from 𝑆𝑑. In subsequent iterations, the distance of any point to the final set of centers 𝑆 only gets smaller. Hence,
max_𝑣 min_{π‘€βˆˆπ‘†} 𝑑(𝑣, 𝑀) ≀ max_𝑣 min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑣, 𝑀) = min_{π‘€βˆˆπ‘†π‘‘} 𝑑(𝑓𝑑, 𝑀) ≀ 2𝛼𝛽·OPT.
However, when we output the final clusters and centers, the farthest point after π‘˜ iterations (say π‘“π‘˜) could be assigned to a center 𝑣𝑗 ∈ 𝑆 that is a 𝛽-approximation of the distance to its true center:
𝑑(π‘“π‘˜, 𝑣𝑗) ≀ 𝛽 min_{π‘€βˆˆπ‘†} 𝑑(π‘“π‘˜, 𝑀) ≀ 2𝛼𝛽²·OPT.
Therefore, every point is assigned to a cluster with distance at most 2𝛼𝛽²·OPT. Hence the claim. β–‘
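As a structural illustration of this greedy loop, here is a minimal Python sketch; approx_farthest and assign are stand-in names for the oracle-based Approx-Farthest and Assign routines, and their signatures are ours.

```python
def greedy_k_center(V, k, approx_farthest, assign):
    """Greedy k-center skeleton matching the structure analyzed in Lemma 10.1.

    approx_farthest(V, centers, assignment): an (approximately) farthest point
        from the current centers under the current assignment.
    assign(V, centers): dict point -> center, an approximate nearest-center
        assignment computed with (noisy) oracle queries.
    """
    centers = [V[0]]                          # arbitrary first center
    assignment = assign(V, centers)
    for _ in range(k - 1):
        far = approx_farthest(V, centers, assignment)   # next center
        centers.append(far)
        assignment = assign(V, centers)       # reassign with the new center
    return centers, assignment
```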

Lemma 10.2. Given a set 𝑆 of centers, Algorithm Assign assigns a point 𝑒 to a cluster 𝑠𝑗 ∈ 𝑆 such that 𝑑(𝑒, 𝑠𝑗) ≀ (1 + ΞΌ)Β² min_{π‘ π‘‘βˆˆπ‘†} {𝑑(𝑒, 𝑠𝑑)} using 𝑂(π‘›π‘˜) queries.

Proof. The proof is essentially the same as Lemma 8.3 and uses MCount instead of Count. β–‘

Lemma 10.3. Given a set of centers 𝑆, Algorithm 4 identifies a point 𝑣𝑗, with probability 1 βˆ’ 𝛿/π‘˜, such that
min_{π‘ π‘—βˆˆπ‘†} 𝑑(𝑣𝑗, 𝑠𝑗) β‰₯ (max_{π‘£π‘‘βˆˆπ‘‰} min_{π‘ π‘‘βˆˆπ‘†} 𝑑(𝑣𝑑, 𝑠𝑑)) / (1 + ΞΌ)⁡.

Proof. Suppose 𝑣𝑑 is the farthest point, assigned to center 𝑠𝑑 ∈ 𝑆. Let 𝑣𝑗, assigned to 𝑠𝑗 ∈ 𝑆, be the point returned by Algorithm 4. From Theorem 8.8, we have
𝑑(𝑣𝑗, 𝑠𝑗) β‰₯ (max_{π‘£π‘–βˆˆπ‘‰} 𝑑(𝑣𝑖, 𝑠𝑖)) / (1 + ΞΌ)Β³ β‰₯ 𝑑(𝑣𝑑, 𝑠𝑑) / (1 + ΞΌ)Β³ β‰₯ (min_{π‘ β€²π‘‘βˆˆπ‘†} 𝑑(𝑣𝑑, 𝑠′𝑑)) / (1 + ΞΌ)Β³.
Due to the error in assignment, using Lemma 10.2,
𝑑(𝑣𝑗, 𝑠𝑗) ≀ (1 + ΞΌ)Β² min_{π‘ β€²π‘—βˆˆπ‘†} 𝑑(𝑣𝑗, 𝑠′𝑗).
Combining the above equations, we have
min_{π‘ β€²π‘—βˆˆπ‘†} 𝑑(𝑣𝑗, 𝑠′𝑗) β‰₯ (min_{π‘ β€²π‘‘βˆˆπ‘†} 𝑑(𝑣𝑑, 𝑠′𝑑)) / (1 + ΞΌ)⁡.
For Approx-Farthest, we use 𝑙 = βˆšπ‘›, 𝑑 = log(2π‘˜/𝛿) and |Ṽ| = βˆšπ‘›π‘‘. So, following the proof of Theorem 3.6, we succeed with probability 1 βˆ’ 𝛿/π‘˜. Hence, the lemma. β–‘

Lemma 10.4. Given a current set of centers 𝑆,
(1) Assign assigns a point 𝑒 to a cluster 𝐢(𝑠𝑖) such that 𝑑(𝑒, 𝑠𝑖) ≀ (1 + ΞΌ)Β² min_{π‘ π‘—βˆˆπ‘†} {𝑑(𝑒, 𝑠𝑗)} using 𝑂(π‘›π‘˜) additional oracle queries.
(2) Approx-Farthest identifies a point 𝑀 in a cluster 𝐢(𝑠𝑖) such that min_{π‘ π‘—βˆˆπ‘†} 𝑑(𝑀, 𝑠𝑗) β‰₯ max_{π‘£π‘‘βˆˆπ‘‰} min_{π‘ π‘‘βˆˆπ‘†} 𝑑(𝑣𝑑, 𝑠𝑑)/(1 + ΞΌ)⁡ with probability 1 βˆ’ 𝛿/π‘˜, using 𝑂(𝑛 logΒ²(π‘˜/𝛿)) oracle queries.

Proof. (1) From Lemma 10.2, we have the claim. We assign a point to a cluster based on the scores its cluster center received in comparison with the other centers. Except for the newly created center, we have previously queried every center against every other center. Therefore, the number of new oracle queries made for every point is 𝑂(π‘˜), which gives a total of 𝑂(π‘›π‘˜) additional new queries used by Assign.

(2) From Lemma 10.3, we have that min_{π‘ π‘—βˆˆπ‘†} 𝑑(𝑀, 𝑠𝑗) β‰₯ max_{π‘£π‘‘βˆˆπ‘‰} min_{π‘ π‘‘βˆˆπ‘†} 𝑑(𝑣𝑑, 𝑠𝑑)/(1 + ΞΌ)⁡ with probability 1 βˆ’ 𝛿/π‘˜. The total number of queries made by Algorithm 4 is 𝑂(𝑛𝑑 + (𝑛𝑑/𝑙 + βˆšπ‘›π‘‘)Β²). For Approx-Farthest, we use 𝑙 = βˆšπ‘›, 𝑑 = log(2π‘˜/𝛿) and |Ṽ| = βˆšπ‘›π‘‘; therefore, the query complexity is 𝑂(𝑛 logΒ²(π‘˜/𝛿)). β–‘

Theorem 10.5 (Theorem 4.2 restated). For ΞΌ < 1/18, Algorithm 6 achieves a (2 + 𝑂(ΞΌ))-approximation for the π‘˜-center objective using 𝑂(π‘›π‘˜Β² + π‘›π‘˜ Β· logΒ²(π‘˜/𝛿)) oracle queries with probability 1 βˆ’ 𝛿.

Proof. From the above claim and Lemma 10.4, Algorithm 6 achieves a 2(1 + ΞΌ)⁹-approximation for the π‘˜-center objective. When ΞΌ < 1/18, we can simplify the approximation factor to 2 + 18ΞΌ, i.e., 2 + 𝑂(ΞΌ). From Lemma 10.4, each iteration succeeds with probability 1 βˆ’ 𝛿/π‘˜; using a union bound, the overall failure probability is 𝛿. For the query complexity, there are π‘˜ iterations, and in each iteration we use Assign and Approx-Farthest; applying Lemma 10.4, we have the theorem. β–‘


11 π‘˜-CENTER : PROBABILISTIC NOISE

11.1 Sampling

Lemma 11.1. Consider the sample $V$ of points obtained by selecting each point of the dataset independently with probability $\frac{450\log(n/\delta)}{m}$. Then, we have $\frac{400\, n\log(n/\delta)}{m} \le |V| \le \frac{500\, n\log(n/\delta)}{m}$ and, for every $i \in [k]$, $|C^*(s_i) \cap V| \ge 400\log(n/\delta)$, with probability $1 - O(\delta)$.

Proof. We include every point in $V$ with probability $\frac{450\log(n/\delta)}{m}$, where $m$ is the size of the smallest cluster. Using a Chernoff bound, with probability $1 - O(\delta)$ we have:

$$\frac{400\, n\log(n/\delta)}{m} \le |V| \le \frac{500\, n\log(n/\delta)}{m}$$

Consider an optimal cluster $C^*(s_i)$ with center $s_i$. As every point is included with probability $\frac{450\log(n/\delta)}{m}$ and $|C^*(s_i)| \ge m$:

$$\mathbb{E}[|C^*(s_i) \cap V|] = |C^*(s_i)| \cdot \frac{450\log(n/\delta)}{m} \ge 450\log(n/\delta)$$

Using a Chernoff bound, with probability at least $1 - \delta/n$, we have $|C^*(s_i) \cap V| \ge 400\log(n/\delta)$. Using a union bound over all $k$ clusters, we have the lemma. β–‘
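As an illustration, the sampling step can be sketched as below; `m` stands for (a lower bound on) the size of the smallest optimal cluster, the constant 450 follows the lemma, and the function name is an assumption of this sketch.

```python
import math
import random

def sample_points(points, m, n, delta, c=450):
    """Bernoulli sampling as in Lemma 11.1: keep each point independently
    with probability c*log(n/delta)/m (capped at 1)."""
    p = min(1.0, c * math.log(n / delta) / m)
    return [x for x in points if random.random() < p]
```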

11.2 Assignment

$$\text{ACount}(u, s_i, s_j) = \sum_{x \in R(s_i)} \mathbb{1}\{\mathcal{O}(u, x, u, s_j) == \text{Yes}\}$$

Lemma 11.2. Consider a point $u$ and $s_j \ne s_i$ such that $d(u, s_i) \le d(u, s_j) - 2\,OPT$ and $|R(s_i)| \ge 12\log(n/\delta)$. Then, $\text{ACount}(u, s_i, s_j) \ge 0.3\,|R(s_i)|$ with probability $1 - \frac{\delta}{n^2}$.

Proof. Using the triangle inequality, for any $x \in R(s_i)$,

$$d(u, x) \le d(u, s_i) + d(s_i, x) \le d(u, s_j) - 2\,OPT + d(s_i, x) \le d(u, s_j)$$

So, O(𝑒, π‘₯,𝑒, 𝑠 𝑗 ) is Yes with a probability at least 1 βˆ’ 𝑝 . We have:

E[ACount(𝑒, 𝑠𝑖 , 𝑠 𝑗 )] =βˆ‘οΈ

π‘₯ βˆˆπ‘… (𝑠𝑖 )E[1{O(𝑒, π‘₯,𝑒, 𝑠 𝑗 ) == Yes}] β‰₯ (1 βˆ’ 𝑝) |𝑅(𝑠𝑖 ) |

Using Hoeffding's inequality, with probability at most $\exp(-|R(s_i)|(1-p)^2/2) \le \frac{\delta}{n^2}$ (using $p \le 0.4$ and $|R(s_i)| \ge 12\log(n/\delta)$), we have

$$\text{ACount}(u, s_i, s_j) \le (1-p)\,|R(s_i)|/2$$

Since $p \le 0.4$, $\Pr[\text{ACount}(u, s_i, s_j) \le 0.3\,|R(s_i)|] \le \Pr[\text{ACount}(u, s_i, s_j) \le (1-p)\,|R(s_i)|/2]$. Therefore, with probability at most $\frac{\delta}{n^2}$, we have $\text{ACount}(u, s_i, s_j) \le 0.3\,|R(s_i)|$. Hence, the lemma. β–‘

Lemma 11.3. Suppose $u \in C^*(s_i)$ and, for some $s_j \in S$, $d(s_i, s_j) \ge 6\,OPT$. Then, Algorithm 8 assigns $u$ to center $s_i$ with probability $1 - \frac{\delta}{n^2}$.

Proof. As $u \in C^*(s_i)$, we have $d(u, s_i) \le 2\,OPT$. Therefore,

$$d(s_j, u) - d(s_i, u) \ge d(s_i, s_j) - 2\,d(s_i, u) \ge 2\,OPT, \quad \text{i.e.,} \quad d(s_j, u) \ge d(s_i, u) + 2\,OPT$$

From Lemma 11.2, we have that if $d(u, s_i) \le d(u, s_j) - 2\,OPT$, then we assign $u$ to $s_i$ with probability $1 - \frac{\delta}{n^2}$. β–‘

Lemma 11.4. Given a set of centers $S$, every $u \in V$ is assigned to a cluster with center $s_i$ such that $d(u, s_i) \le \min_{s_j \in S} d(u, s_j) + 2\,OPT$ with probability $1 - \delta/n$.

Proof. From Lemma 11.2, we have that a point $u$ is assigned to $s_l$ over $s_m$ if $d(u, s_l) \le d(u, s_m) - 2\,OPT$. If $s_i$ is the final center assigned to $u$, then for every $s_j$ it must be true that $d(u, s_j) \ge d(u, s_i) - 2\,OPT$, which implies $d(u, s_i) \le \min_{s_j \in S} d(u, s_j) + 2\,OPT$. Using a union bound over at most $n$ points, with probability $1 - \delta/n$ every point $u$ is assigned as claimed. β–‘
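The assignment procedure can be sketched as a tournament that keeps a current best center and challenges it with every other center through ACount; the $0.3\,|R(\cdot)|$ threshold is the one from Lemma 11.2, while the control flow, the `cores` dictionary (mapping each center to its core), and the reuse of the `acount` sketch above are illustrative assumptions rather than a transcription of Algorithm 8.

```python
def assign_point(u, centers, cores, oracle):
    """Assign u by sequential elimination: keep a current best center and
    switch to a challenger whenever ACount gives strong evidence
    (threshold 0.3*|core|, as in Lemma 11.2) that u is closer to it."""
    best = centers[0]
    for cand in centers[1:]:
        votes = acount(u, cores[cand], best, oracle)   # evidence for cand
        if votes >= 0.3 * len(cores[cand]):
            best = cand
    return best
```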


11.3 Core Calculation

Consider a cluster $C(s_i)$ with center $s_i$. Let $S_a^b$ denote the number of points in the set $\{x : a \le d(x, s_i) < b\}$.

$$\text{Count}(u) = \sum_{x \in C(s_i)} \mathbb{1}\{\mathcal{O}(s_i, x, s_i, u) == \text{No}\}$$

Lemma 11.5. Consider any two points $u_1, u_2 \in C(s_i)$ such that $d(u_1, s_i) \le d(u_2, s_i)$. Then $\mathbb{E}[\text{Count}(u_1)] - \mathbb{E}[\text{Count}(u_2)] = (1 - 2p)\, S_{d(u_1, s_i)}^{d(u_2, s_i)}$.

Proof. For a point $u \in C(s_i)$,

$$\mathbb{E}[\text{Count}(u)] = \mathbb{E}\Big[\sum_{x \in C(s_i)} \mathbb{1}\{\mathcal{O}(s_i, x, s_i, u) == \text{No}\}\Big] = S_0^{d(u, s_i)}\, p + S_{d(u, s_i)}^{\infty}\, (1 - p)$$

Therefore,

$$\mathbb{E}[\text{Count}(u_1)] - \mathbb{E}[\text{Count}(u_2)] = \Big(S_0^{d(u_1, s_i)} p + S_{d(u_1, s_i)}^{d(u_2, s_i)} (1-p) + S_{d(u_2, s_i)}^{\infty} (1-p)\Big) - \Big(S_0^{d(u_1, s_i)} p + S_{d(u_1, s_i)}^{d(u_2, s_i)} p + S_{d(u_2, s_i)}^{\infty} (1-p)\Big) = (1 - 2p)\, S_{d(u_1, s_i)}^{d(u_2, s_i)} \qquad \square$$

Lemma 11.6. Consider any two points $u_1, u_2 \in C(s_i)$ such that $d(u_1, s_i) \le d(u_2, s_i)$ and $S_{d(u_1, s_i)}^{d(u_2, s_i)} \ge \sqrt{100\,|C(s_i)|\log(n/\delta)}$. Then, $\text{Count}(u_1) > \text{Count}(u_2)$ with probability $1 - \delta/n^2$.

Proof. Suppose $u_1, u_2 \in C(s_i)$. Both $\text{Count}(u_1)$ and $\text{Count}(u_2)$ are sums of $|C(s_i)|$ binary random variables. Using Hoeffding's inequality, each of the following holds with probability at most $\exp(-\beta^2/(2|C(s_i)|))$:

$$\text{Count}(u_1) \le \mathbb{E}[\text{Count}(u_1)] - \frac{\beta}{2}, \qquad \text{Count}(u_2) > \mathbb{E}[\text{Count}(u_2)] + \frac{\beta}{2}$$

Using a union bound, with probability at least $1 - 2\exp(-\beta^2/(2|C(s_i)|))$, we can conclude that

$$\text{Count}(u_1) - \text{Count}(u_2) > \mathbb{E}[\text{Count}(u_1) - \text{Count}(u_2)] - \beta = (1 - 2p)\, S_{d(u_1, s_i)}^{d(u_2, s_i)} - \beta$$

Choosing $\beta = (1 - 2p)\, S_{d(u_1, s_i)}^{d(u_2, s_i)}$, we have $\text{Count}(u_1) > \text{Count}(u_2)$ with probability at least (for constant $p \le 0.4$)

$$1 - 2\exp\Big(-(1-2p)^2 \big(S_{d(u_1, s_i)}^{d(u_2, s_i)}\big)^2 / (2|C(s_i)|)\Big) \ge 1 - 2\exp\Big(-0.02\, \big(S_{d(u_1, s_i)}^{d(u_2, s_i)}\big)^2 / |C(s_i)|\Big)$$

Further, using $S_{d(u_1, s_i)}^{d(u_2, s_i)} \ge \sqrt{100\,|C(s_i)|\log(n/\delta)}$, the failure probability is at most $2\exp(-2\log(n/\delta)) = O(\delta/n^2)$. β–‘

Lemma 11.7. If $|C(s_i)| \ge 400\log(n/\delta)$, then $|R(s_i)| \ge 200\log(n/\delta)$ with probability $1 - |C(s_i)|^2\,\delta/n^2$.

Proof. From Lemma 11.6, if two points $u_1, u_2$ have at least $\sqrt{100\,|C(s_i)|\log(n/\delta)}$ points between them (in the ordering by distance from $s_i$), then we identify the closer one correctly. When $|C(s_i)| \ge 400\log(n/\delta)$, there are $\sqrt{100\,|C(s_i)|\log(n/\delta)} \ge 200\log(n/\delta)$ points between every point and the point with rank $200\log(n/\delta)$. Therefore, $|R(s_i)| \ge 200\log(n/\delta)$. Using a union bound over all pairs of points in the cluster, we get the claim. β–‘
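Putting Lemmas 11.5–11.7 together, one plausible way to extract a core is to rank the cluster by Count and keep the top-ranked points; the sketch below reuses `count_farther` from above and treats the $200\log(n/\delta)$ cutoff of Lemma 11.7 as given, so it is an illustration rather than the exact core-construction routine.

```python
import math

def compute_core(center, cluster, n, delta, oracle):
    """Rank the cluster by Count (larger Count ~ closer to the center, by
    Lemma 11.5) and keep the 200*log(n/delta) highest-ranked points."""
    ranked = sorted(cluster,
                    key=lambda u: count_farther(u, center, cluster, oracle),
                    reverse=True)
    core_size = min(len(ranked), int(200 * math.log(n / delta)))
    return ranked[:core_size]
```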

Lemma 11.8. If $x \in C^*(s_i)$, then either $x \in C(s_i)$ or $x$ is assigned to a cluster with center $s_j$ such that $d(x, s_j) \le 8\,OPT$.

Proof. If π‘₯ ∈ πΆβˆ— (𝑠𝑖 ), we argue that it will be assigned to 𝐢 (𝑠𝑖 ). For the sake of contradiction, suppose π‘₯ is assigned to a cluster 𝐢 (𝑠 𝑗 ) forsome 𝑠 𝑗 ∈ 𝑆 . We have 𝑑 (π‘₯, 𝑠𝑖 ) ≀ 2OPT and let 𝑑 (𝑠𝑖 , 𝑠 𝑗 ) β‰₯ 6OPT

𝑑 (𝑠𝑖 , 𝑠 𝑗 ) ≀ 𝑑 (𝑠 𝑗 , π‘₯) + 𝑑 (𝑠𝑖 , π‘₯)

𝑑 (𝑠 𝑗 , π‘₯) β‰₯ 4OPTHowever, we know that 𝑑 (𝑠 𝑗 , π‘₯) ≀ 𝑑 (𝑠𝑖 , π‘₯) + 2OPT ≀ 4OPT from Lemma 11.2. We have a contradiction. Therefore, π‘₯ is assigned to 𝑠𝑖 . If𝑑 (𝑠𝑖 , 𝑠 𝑗 ) ≀ 6OPT, we have 𝑑 (π‘₯, 𝑠 𝑗 ) ≀ 𝑑 (π‘₯, 𝑠𝑖 ) + 2OPT ≀ 8OPT. Hence, the lemma. β–‘


11.4 Farthest point computation

Let $R(s_i)$ denote the core of the cluster $C(s_i)$, which contains $\Theta(\log(n/\delta))$ points. We define FCount for comparing two points $v_i, v_j$ with respect to their centers $s_i, s_j$ respectively. If $s_i \ne s_j$, we let:

$$\text{FCount}(v_i, v_j) = \sum_{x \in R(s_i),\, y \in R(s_j)} \mathbb{1}\{\mathcal{O}(v_i, x, v_j, y) == \text{Yes}\}$$

Otherwise, we let $\text{FCount}(v_i, v_j) = \sum_{x \in R(s_i)} \mathbb{1}\{\mathcal{O}(v_i, x, v_j, x) == \text{Yes}\}$. First, we observe that each of these summations is over at least $|R(s_i)|$ many terms.
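A sketch of FCount under the same hypothetical `oracle` interface: it collects votes on whether $v_i$ is closer to its own core than $v_j$ is to its core, which is the comparison the farthest-point computation relies on in this noise model.

```python
def fcount(v_i, v_j, core_i, core_j, oracle):
    """FCount(v_i, v_j): pairwise votes over the two cores when the points
    lie in different clusters, and a single sweep over the shared core
    otherwise (matching the two cases of the definition above)."""
    if core_i is not core_j:
        return sum(1 for x in core_i for y in core_j
                   if oracle(v_i, x, v_j, y))
    return sum(1 for x in core_i if oracle(v_i, x, v_j, x))
```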

Lemma 11.9. Consider two records $v_i, v_j$ in different clusters $C(s_i), C(s_j)$ respectively such that $d(s_i, v_i) < d(s_j, v_j) - 4\,OPT$. Then $\text{FCount}(v_i, v_j) \ge 0.3\,|R(s_i)||R(s_j)|$ with probability $1 - \frac{\delta}{n^2}$.

Proof. We know $\max_{x \in R(s_i)} d(s_i, x) \le 2\,OPT$ and $\max_{y \in R(s_j)} d(s_j, y) \le 2\,OPT$. For a point $x \in R(s_i)$ and $y \in R(s_j)$,

$$d(v_j, y) \ge d(s_j, v_j) - d(s_j, y) > d(v_i, s_i) + 4\,OPT - d(s_j, y) \ge d(v_i, x) - d(x, s_i) + 4\,OPT - d(s_j, y) \ge d(v_i, x)$$

So, $\mathcal{O}(v_i, x, v_j, y)$ is No with probability at most $p$. As $p \le 0.4$, we have:

$$\mathbb{E}[\text{FCount}(v_i, v_j)] \ge (1-p)\,|R(s_i)||R(s_j)|$$

$$\Pr[\text{FCount}(v_i, v_j) \le 0.3\,|R(s_i)||R(s_j)|] \le \Pr[\text{FCount}(v_i, v_j) \le (1-p)\,|R(s_i)||R(s_j)|/2]$$

From Hoeffding's inequality (with binary random variables), with probability at most $\exp\big(-\frac{|R(s_i)||R(s_j)|(1-p)^2}{2}\big) \le \frac{\delta}{n^2}$ (using $|R(s_i)||R(s_j)| \ge 12\log(n/\delta)$ and $p < 0.4$), we have $\text{FCount}(v_i, v_j) \le (1-p)\,|R(s_i)||R(s_j)|/2$. Therefore, with probability at most $\delta/n^2$, we have $\text{FCount}(v_i, v_j) \le 0.3\,|R(s_i)||R(s_j)|$. β–‘

In order to calculate the farthest point, we use the ideas discussed in Section 3 to identify the point that has the maximum distance from its assigned center. As noted in Section 3.3, our approximation guarantees depend on the maximum distance of points in the core from the center. In the next lemma, we show that, assuming a bound on the maximum distance of a core point from its center (see Lemma 11.8), we can obtain a good approximation for the farthest point.

Lemma 11.10. Let $\max_{s_j \in S,\, u \in R(s_j)} d(u, s_j) \le \alpha$. In every iteration, if the farthest point is at a distance more than $(6\alpha + 3\,OPT)$, then Approx-Farthest outputs a $(6\alpha/OPT + 3)$-approximation. Otherwise, the point output is at most $(6\alpha + 3\,OPT)$ away.

Proof. The farthest point output by Approx-Farthest is a $6\alpha$ additive approximation. However, the assignment of points to clusters introduces another additive error of $2\,OPT$, resulting in a total additive error of $6\alpha + 2\,OPT$. Suppose in the current iteration the distance of the farthest point is $\beta\,OPT$; then the point output by Approx-Farthest is at least $\beta\,OPT - (6\alpha + 2\,OPT)$ away. So, the approximation ratio is $\frac{\beta\,OPT}{\beta\,OPT - (6\alpha + 2\,OPT)}$. If $\beta\,OPT \ge 6\alpha + 3\,OPT$, we have $\frac{\beta\,OPT}{\beta\,OPT - (6\alpha + 2\,OPT)} \le \beta$. As we are trying to minimize the approximation ratio, we set $\beta\,OPT = 6\alpha + 3\,OPT$ and get the claimed guarantee. β–‘
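For concreteness, the balancing step can be written in one line (a restatement of the arithmetic above under the same assumption $\max_{s_j \in S,\, u \in R(s_j)} d(u, s_j) \le \alpha$, not an additional claim):

$$\frac{\beta\,OPT}{\beta\,OPT - (6\alpha + 2\,OPT)} \;\le\; \frac{\beta\,OPT}{OPT} \;=\; \beta \quad \text{whenever } \beta\,OPT \ge 6\alpha + 3\,OPT, \qquad \text{and } \beta\,OPT = 6\alpha + 3\,OPT \text{ gives } \beta = \frac{6\alpha}{OPT} + 3$$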

11.5 Final Guarantees

Throughout this section, we assume that $m = \Omega\big(\frac{\log^3(n/\delta)}{\delta}\big)$ for a given failure probability $\delta > 0$.

Lemma 11.11. Given a current set of centers $S$, and $\max_{v_j \in S,\, u \in R(v_j)} d(u, v_j) \le \alpha$, we have:

(1) Every point $u$ is assigned to a cluster $C(s_i)$ such that $d(u, s_i) \le \min_{s_j \in S} d(u, s_j) + 2\,OPT$ using $O(nk\log(n/\delta))$ oracle queries, with probability $1 - O(\delta)$.
(2) Approx-Farthest identifies a point $w$ in cluster $C(s_i)$ such that $\min_{v_j \in S} d(w, v_j) \ge \max_{v_j \in V} \min_{s_j \in S} d(v_j, s_j)/(6\alpha/OPT + 3)$ with probability $1 - O(\delta/k)$, using $O(|V|\log^3(n/\delta))$ oracle queries.

Proof. (1) First, we argue that the cores are calculated correctly. From Lemma 11.3, we have that a point $u \in C^*(s_i)$ is assigned to the correct center $s_i$. Therefore, all the points of $V \cap C^*(s_i)$ move to $C(s_i)$. As $|C(s_i)| \ge |V \cap C^*(s_i)| \ge 400\log(n/\delta)$, we have $|R(s_i)| \ge 200\log(n/\delta)$ with probability $1 - |C(s_i)|^2\delta/n^2$ (from Lemma 11.7). Using a union bound, all the cores are calculated correctly with a failure probability of at most $\sum_i |C(s_i)|^2\,\delta/n^2 \le \delta$.

For every point, we compare its distance to every cluster center by maintaining the current closest center. From Lemma 11.2, each such comparison fails with probability $\delta/n^2$. Using a union bound, the failure probability is $O(kn\delta/n^2) = O(\delta)$. From Lemma 11.2, we have the approximation guarantee.

(2) From Lemma 11.10, we have the claim regarding the approximation guarantee. For Approx-Farthest, we use the parameters $t = 2\log(2k/\delta)$ and $l = \sqrt{|V|}$. As we make $O(|V|\log^2(k/\delta))$ cluster comparisons using Algorithm ClusterComp (for Approx-Farthest), the total number of oracle queries is $O(|V|\log(n/\delta)\log^2(k/\delta)) = O(|V|\log^3(n/\delta))$. Using a union bound, the failure probability is $O(\delta/k + |V|\log^2(k/\delta)/n^2) = O(\delta/k)$. β–‘

Theorem 11.12 (Theorem 4.4 restated). Given $p \le 0.4$, a failure probability $\delta$, and $m = \Omega\big(\frac{\log^3(n/\delta)}{\delta}\big)$, Algorithm 7 achieves an $O(1)$-approximation for the $k$-center objective using $O\big(nk\log(n/\delta) + \frac{n^2}{m^2}k\log^2(n/\delta)\big)$ oracle queries with probability $1 - O(\delta)$.

Proof. Using a similar proof as Lemma 10.1, we have that the approximation ratio of Algorithm 7 is $4(6\alpha/OPT + 3) + 2$. Using $\alpha = 8\,OPT$ from Lemma 11.8, the approximation factor is 206. For the first stage, from Lemma 11.11, the number of oracle queries over all $k$ iterations is $O(|V|k\log^3(n/\delta))$. Using a union bound over the $k$ iterations, the success probability is $1 - O(\delta)$. For the calculation of the cores, the query complexity is $O(|V|^2 k)$. For assignment, the query complexity is $O(nk\log(n/\delta))$. Therefore, the total query complexity is $O\big(nk\log(n/\delta) + \frac{n}{m}k\log^4(n/\delta) + \frac{n^2}{m^2}k\log^2(n/\delta)\big) = O\big(nk\log(n/\delta) + \frac{n^2}{m^2}k\log^2(n/\delta)\big)$. β–‘

12 HIERARCHICAL CLUSTERING

Lemma 12.1 (Lemma 5.1 restated). Given a collection of clusters $\mathcal{C} = \{C_1, \ldots, C_r\}$, our algorithm to calculate the closest pair (using Algorithm 4) identifies $C_1$ and $C_2$ to merge according to the single linkage objective such that $d_{SL}(C_1, C_2) \le (1+\mu)^3 \min_{C_i, C_j \in \mathcal{C}} d_{SL}(C_i, C_j)$, with probability $1 - \delta$, and requires $O(r^2\log^2(n/\delta))$ queries.

Proof. In each iteration, our algorithm considers a list of $\binom{r}{2}$ distance values and calculates the closest using Algorithm 4. The claim follows from the proof of Theorem 3.6. β–‘

Using the same analysis, we get the following result for complete linkage.

Lemma 12.2. Given a collection of clusters $\mathcal{C} = \{C_1, \ldots, C_r\}$, our algorithm to calculate the closest pair (using Algorithm 4) identifies $C_1$ and $C_2$ to merge according to the complete linkage objective such that $d_{CL}(C_1, C_2) \le (1+\mu)^3 \min_{C_i, C_j \in \mathcal{C}} d_{CL}(C_i, C_j)$, with probability $1 - \delta$, and requires $O(r^2\log^2(n/\delta))$ queries.

Theorem 12.3 (Theorem 5.2 restated). In any iteration, suppose the distance between a cluster $C_j \in \mathcal{C}$ and its identified nearest neighbor is an $\alpha$-approximation of its distance to the optimal nearest neighbor. Then the distance between the pair of clusters merged by Algorithm 11 is an $\alpha(1+\mu)^3$-approximation of the optimal distance between the closest pair of clusters in $\mathcal{C}$, with probability $1 - \delta$, using $O(n\log^2(n/\delta))$ oracle queries.

Proof. Algorithm 11 iterates over the list of pairs $(C_i, \tilde{C}_i)$ for every $C_i \in \mathcal{C}$, where $\tilde{C}_i$ denotes the identified nearest neighbor of $C_i$, and identifies the closest pair using Algorithm 4. The claim follows from the proof of Theorem 3.6. β–‘
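As an illustration of the overall loop that these guarantees feed into, the following sketch runs agglomerative clustering on top of an assumed noisy closest-pair routine `approx_closest_pair` (standing in for the Algorithm 4-based selection analyzed above); the routine, its interface, and the stopping rule are placeholders for this sketch.

```python
def agglomerative_clustering(points, num_clusters, approx_closest_pair):
    """Repeatedly merge the (approximately) closest pair of clusters, as
    reported by a noisy-comparison closest-pair routine, until only
    `num_clusters` clusters remain."""
    clusters = [[p] for p in points]              # start from singletons
    while len(clusters) > num_clusters:
        i, j = approx_closest_pair(clusters)      # indices of pair to merge
        merged = clusters[i] + clusters[j]
        clusters = [c for t, c in enumerate(clusters) if t not in (i, j)]
        clusters.append(merged)
    return clusters
```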
