
Research Statement - Duke University (kulkarni/ResearchStatement.pdf)

Research Statement

Janardhan Kulkarni ( [email protected]),Theory group, Microsoft Research, Redmond.

My main research interest is in the design of efficient algorithms for problems in discrete optimization. Another important aspect of my research is bridging the gap between theory and practice. My focus over the past five years has been on developing algorithms for scheduling and resource allocation problems that arise in large data centers and big data applications. Broadly speaking, my work can be placed in one of the following categories:

• Algorithms and Uncertainty: The rapid increase in data and cloud based applications has resulted in the growth of data centers at an unprecedented pace. Some of the most important problems in the context of data centers are resource allocation and scheduling problems. The main technical challenge in designing efficient algorithms for data centers is handling the uncertainty that arises because the input is online: jobs arrive one by one, and online algorithms have to make irrevocable decisions based only on the input seen so far. My work in online scheduling developed new techniques based on duality theory to solve problems concerning scheduling on unrelated (heterogeneous) machines, which had been open for more than fifteen years (FOCS'14 [24]). Further, I developed the theory of the polytope scheduling problem (PSP) (STOC'14 [22], FOCS'15 [23], APPROX'16 [21]) to capture the multidimensional nature of scheduling problems that arise in data centers. My work established interesting connections between multidimensional scheduling and competitive equilibrium concepts from economics.

• Approximation Algorithms for Fundamental Problems: Scheduling and resource allocation problems are central topics of research in the theory of algorithm design. The origins of many foundational algorithm design principles that are now part of standard books on the topic, such as LP rounding methods and applications of the Lovász Local Lemma, can be traced to the study of these problems. Despite more than two decades of research, several problems concerning flow-time (delay or response time) optimization were open until recently. I designed the first approximation algorithms for minimizing average flow-time, maximum flow-time, and average tardiness on unrelated machines (STOC'15 [6], submitted [14]). My results on flow-time resolved a long-standing conjecture.

• Optimization Under Strategic Settings: Since clusters in data centers are shared resources, selfish/strategic behavior by users is common. An important research direction in algorithmic game theory is to understand the effect of selfish behavior on the quality of outcomes, quantified in terms of the price of anarchy (PoA). I have studied scheduling and routing problems in selfish settings and designed algorithms with small PoA. My work in this area introduced linear and convex programming duality based techniques for bounding the PoA (SODA'15 [27], ITCS'14 [8], ICALP'14 [7]), which seem as powerful as the smoothness framework [32].

• Applied Algorithms: I spend some portion of my time bridging the gap between theoretically proven algorithms and algorithms that need to be implemented in real systems. I have designed practical algorithms for efficient routing in reconfigurable data centers (SIGCOMM'16 [16]), multidimensional scheduling in the presence of dependencies (DAGs) (OSDI'16 [19]), and scheduling periodic jobs (OSDI'16 [25]). I introduced the notion of temporal relaxations of fairness as a new way of looking at fairness in cluster scheduling (submitted [18]). Some of the algorithms developed in these projects have also been open-sourced and are now part of Yarn, the resource manager of Hadoop [1]. Hadoop is the most popular implementation of the MapReduce framework.

The study of scheduling and resource allocation problems has rich connections to approximation and online algorithms, economics, game theory, and queueing theory. Naturally, my work has been influenced by various ideas from these different areas: our resolution of a classical online scheduling problem crucially uses the idea of Nash equilibrium in the design and analysis of the algorithm. Our work in game theory shows that LP duality theory, a fundamental technique in the area of approximation and online algorithms, is equally powerful in price of anarchy analysis. Our work on multidimensional scheduling (PSP) shows the effectiveness of approaching scheduling problems through the lens of market equilibrium concepts. This interplay of ideas from different areas lies at the heart of most of the algorithms I have designed.

Figure 1: Many algorithms developed in my work use concepts from various areas, both in the design and in the analysis. The figure illustrates the connections my research has established.

In the following paragraphs, I will give a brief overview of the primary contributions of my current research. I will start with classical optimization problems, then discuss my work in more applied areas, and conclude with future plans.

1 Classical Problems

I spend a large fraction of my time working on fundamental problems that have remained open for a long time. I believe that making progress on such problems inevitably leads to the development of ideas and techniques that will have a significant impact on algorithm design.

First Approximation Algorithms For Flow-Time The unrelated machines model, introduced in the seminal work of Lenstra, Shmoys, and Tardos [28], is a general abstraction of the heterogeneity of resources found in data centers and modern processor architectures. The main assumption in the unrelated machines model is that jobs exhibit different behavior when scheduled on different machines. The problem of minimizing flow-time, or response time, on unrelated machines is one of the central questions in the theory of approximation algorithms. The flow-time measures the amount of time a job waits in the system before it completes, and is the most widely used quality-of-service metric in practice. Despite more than twenty-five years of research, the problem of scheduling jobs to minimize the average flow-time on unrelated machines remained a fundamental open problem in approximation algorithms. Similarly, the problem of minimizing the maximum flow-time, a natural generalization of the load balancing problem [14], also had no non-trivial approximation algorithms.

In my work with Nikhil Bansal (STOC'15 [6]), I designed the first polylogarithmic approximation algorithms for minimizing average flow-time and minimizing maximum flow-time, thus resolving a long-standing conjecture. Our algorithms and analyses introduced an iterated rounding technique for flow-time optimization, and gave a new framework for analyzing flow-time related objective functions.

First Approximation Algorithm For Tardiness While minimizing the average flow-time is a natural measure of QoS, in cluster scheduling settings it is also required to finish certain high priority jobs within their deadlines. An objective function that incorporates both of these considerations is the weighted tardiness: the tardiness of a job with deadline dj is the difference between its completion time and its deadline, or zero if the job completes before its deadline. A desirable feature of this objective function is that it simultaneously captures several extensively studied objective functions such as makespan, completion time, and deadline scheduling.
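As a concrete illustration (the job parameters below are made up, not from any of the cited papers), the flow-time and tardiness objectives can be computed from a job's release time rj, completion time Cj, and deadline dj:

```python
# Toy illustration of the flow-time and tardiness objectives.
# Job parameters (release time, completion time, deadline) are hypothetical.

def flow_time(release, completion):
    """Time the job spends in the system before it completes: C_j - r_j."""
    return completion - release

def tardiness(completion, deadline):
    """Amount by which the job misses its deadline: max(0, C_j - d_j)."""
    return max(0, completion - deadline)

jobs = [
    # (release r_j, completion C_j, deadline d_j)
    (0, 4, 5),   # finishes before its deadline: tardiness 0
    (1, 9, 6),   # finishes 3 units late
]

avg_flow = sum(flow_time(r, c) for r, c, _ in jobs) / len(jobs)
total_tardiness = sum(tardiness(c, d) for _, c, d in jobs)
print(avg_flow, total_tardiness)  # 6.0 3
```

With deadlines set to infinity, tardiness degenerates to completion time; with all deadlines equal, maximum tardiness captures makespan, which is why the objective unifies these special cases.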

Although the tardiness objective and its various special cases have been studied for more than two decades, no results were known for unrelated machines. In a recent work with Nikhil Devanur (submitted to STOC [14]), we designed the first bicriteria constant factor approximation algorithm for the problem. We obtain our result by a substantial generalization of the landmark result of Lenstra, Shmoys, and Tardos [28].

First Duality Based Framework For Non-Clairvoyant Algorithms In many practical applications, it is difficult to get accurate knowledge of the processing requirements of jobs. Moreover, scheduling problems that arise in data centers are also online in nature, since we get to know the properties of a job only upon its arrival. A major problem in online scheduling that had remained open for a long time was whether there exist non-clairvoyant and online algorithms – algorithms that make scheduling decisions without knowing the job lengths and future job arrivals – for the average flow-time objective on unrelated machines.

In my work with Sungjin Im, Kamesh Munagala, and Kirk Pruhs (FOCS'14 [24]), I resolved this question and gave an online algorithm with the optimal competitive ratio in the resource augmentation model.¹ Our work introduced a unified LP-duality based approach to analyzing non-clairvoyant scheduling problems that simplified and generalized all the previous works on the topic. Interestingly, our framework also uses ideas from game theory; as we will see later, duality theory also lies at the heart of the analysis of games.

2 Good Theory is Inspired By Practice

I believe that good theory is inspired by practice and vice versa. Hence, a major part of my research is devoted to understanding algorithmic problems that are crucial in practice. I would like to demonstrate this aspect of my research with two examples. Most of the resulting papers appeared in flagship conferences such as STOC, FOCS, SIGCOMM, and OSDI, signifying both their theoretical and practical relevance.

2.1 Cluster Scheduling

Theory Why has cluster scheduling become an important topic of research in recent years? The reasons are twofold. With the explosive growth of data centers, it has become evident to most system designers that even a modest increase in the efficiency of clusters through careful scheduling of jobs can save millions of dollars. The technical reason for this renewed interest is that scheduling in data centers presents algorithmic challenges that are not completely captured by the existing models. For example, the unrelated machines model, arguably the most general model of machine scheduling, does not capture the multidimensional nature of the problems that arise in data centers.

¹ In the resource augmentation model, the online algorithm is given a small amount of extra speed.



The polytope scheduling problem (PSP), which we introduced in (STOC'14 [22]), models the scheduling problems that arise in multidimensional settings. In PSP, a scheduling decision at each time step entails partitioning a set of resources, such as CPU, memory, bandwidth, etc., among a set of jobs. Given a partition of the resources to the jobs, each job executes at a rate which is some function of its allocation. By choosing the rate functions appropriately, PSP can model many problems such as unrelated machine scheduling, switch scheduling, broadcast scheduling, and routing over networks.

Our key contribution in this line of work is to view multidimensional scheduling from an economic angle. Our algorithm for PSP is a widely studied algorithm in economics and game theory called Proportional Fairness (PF), proposed in the seminal work of Nash [30]. PF computes a competitive equilibrium between the set of jobs and the resources, where it sets a price for each resource and every job buys the best bundle of resources it can afford. The analysis then proceeds by exploiting many interesting aspects of competitive equilibrium, such as the existence of market clearing prices and connections to the Eisenberg-Gale convex program and its dual. In hindsight, market equilibrium concepts seem to give an intuitive and systematic approach to the design of algorithms for multidimensional settings.
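To give a feel for the equilibrium concept PF relies on, here is a minimal sketch of computing market-clearing prices for a toy linear Fisher market via proportional response dynamics. The utilities and budgets are made up, and this is an illustration of the competitive equilibrium idea, not the authors' actual PSP algorithm:

```python
# Sketch: market-clearing prices for a toy linear Fisher market, computed
# with proportional response dynamics. Buyers repeatedly re-split their
# budgets across goods in proportion to the utility each good delivered.

def proportional_response(utilities, budgets, iters=1000):
    """utilities[i][j]: value of good j to buyer i; each good has supply 1."""
    n, m = len(utilities), len(utilities[0])
    # Start with each buyer splitting its budget evenly across goods.
    bids = [[budgets[i] / m for _ in range(m)] for i in range(n)]
    for _ in range(iters):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        # Allocation: buyer i receives its bid's share of each good.
        alloc = [[bids[i][j] / prices[j] for j in range(m)] for i in range(n)]
        # Re-bid in proportion to the utility derived from each good.
        new_bids = []
        for i in range(n):
            gains = [utilities[i][j] * alloc[i][j] for j in range(m)]
            total = sum(gains)
            new_bids.append([budgets[i] * g / total for g in gains])
        bids = new_bids
    return [sum(bids[i][j] for i in range(n)) for j in range(m)]

# Two buyers, two goods; buyer 0 only values good 0, buyer 1 only good 1,
# so at equilibrium each buyer spends its whole budget on its own good.
prices = proportional_response([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
print(prices)  # [1.0, 1.0]
```

At the fixed point, every good's price equals the total money bid on it (the market clears), which is exactly the "prices plus best affordable bundle" view of PF described above.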

Another important class of multidimensional scheduling problems that arises in data centers is packing virtual machines (VMs). Each VM requests some amount of CPU, memory, bandwidth, etc., and the goal is to pack VMs efficiently on a given cluster. The VM packing problem is a special case of the vector scheduling problem: a set of vectors arrives online, and the goal is to assign them to a set of machines so as to minimize the makespan (that is, the largest load on any dimension). Although the problem has been extensively studied for more than 15 years, tight results were not known before our work. Using a potential function argument, we designed an algorithm with tight upper and lower bounds for the problem.
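To make the vector scheduling setup concrete, here is a minimal online baseline: assign each arriving vector to the machine that minimizes the resulting makespan. The demand vectors are hypothetical, and this greedy rule is only a baseline sketch, not the potential-function algorithm from the work described above:

```python
# Minimal online baseline for vector scheduling: place each arriving
# d-dimensional job on the machine that minimizes the resulting largest
# per-dimension load (the makespan).

def assign_online(jobs, num_machines):
    """jobs: list of d-dimensional demand vectors, arriving one by one."""
    d = len(jobs[0])
    loads = [[0.0] * d for _ in range(num_machines)]
    assignment = []
    for job in jobs:
        # Machine whose max-dimension load after adding the job is smallest
        # (ties broken by machine index).
        best = min(
            range(num_machines),
            key=lambda m: max(loads[m][k] + job[k] for k in range(d)),
        )
        for k in range(d):
            loads[best][k] += job[k]
        assignment.append(best)
    makespan = max(load for machine in loads for load in machine)
    return assignment, makespan

jobs = [(2, 1), (1, 2), (2, 2), (1, 1)]  # hypothetical (cpu, memory) demands
assignment, makespan = assign_online(jobs, num_machines=2)
print(assignment, makespan)  # [0, 1, 0, 1] 4
```

With a single dimension this is exactly Graham's greedy list scheduling; the difficulty of vector scheduling comes from balancing all dimensions simultaneously.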

The work on PSP is joint with Sungjin Im, Kamesh Munagala, and Ben Moseley, and appeared in STOC'14 [22], FOCS'15 [23], and APPROX'16 [21]. The work on the vector scheduling problem is joint with Sungjin Im, Nat Kell, and Debmalya Panigrahi, and appeared in FOCS'15 [23]. Some of the new results on this topic are in submission [20].

Practice At Microsoft Research Redmond, I spent a significant fraction of my time working to improve the efficiency of real cluster schedulers, in particular Yarn. The main goal of these projects was to identify high level gaps in the existing cluster schedulers that result in significant performance loss, and to design theoretically inspired algorithms to overcome them. We showed three ways in which existing schedulers can be improved substantially.

Temporal Fairness The two most important considerations in cluster scheduling are fairness and cluster utilization. A big drawback of existing cluster schedulers is that they ignore the temporal properties of jobs and enforce fairness instantaneously, which leads to significant underutilization of the cluster. In contrast, jobs care only about temporal properties, in particular completion times. In my work with Goiri et al. [18] (under submission), we introduced temporal relaxations of fairness, a conceptually new view of fairness in the cluster scheduling setting. Further, we developed the first cluster scheduler with provable performance guarantees that improves performance while ensuring fairness on job completion times.

Scheduling DAGs The main observation behind this project is that jobs in production clusters have complex dependencies between them (represented by DAGs), and the state-of-the-art cluster schedulers, being instantaneous, completely ignore dependencies while making scheduling decisions, leading to a huge loss in efficiency. In collaboration with Grandl et al. (OSDI'16 [19]), we developed Graphene, a scheduler that optimizes makespan and completion time while taking into account dependencies among jobs, resulting in huge improvements in efficiency.

Scheduling Periodic Jobs In an industrial setup, many jobs are periodic and need to be completed within certain intervals of time. An algorithm can exploit this periodicity to plan ahead and schedule the jobs efficiently. This was the key idea behind Morpheus (Abdu Jyothi et al., OSDI'16 [25]), a cluster scheduler that does long-term planning for periodic jobs by scheduling them based on their machine preferences (for example, the location of data) so that non-periodic jobs that arrive online can be packed efficiently.

We have open-sourced some of this work by implementing it in Yarn, the resource manager of Hadoop.



2.2 Routing in Reconfigurable Data Centers

Theory One of the emerging technologies for connecting servers within a data center is to use light (lasers). An advantage of this approach is that, as traffic between servers changes over time, the topology can be reconfigured. In such contexts, the Birkhoff-von Neumann (BVN) decomposition theorem, a beautiful result from matching theory, has been widely used to route traffic among servers [33, 31]. The BVN theorem states that any doubly stochastic matrix can be represented as a convex combination of permutation matrices; in graph-theoretic language, a fractional matching in a bipartite graph can be represented as a convex combination of integral matchings.

In routing applications, a doubly stochastic matrix represents traffic that needs to be routed among a set of servers, and a BVN decomposition gives a schedule to route the traffic. Switching between matchings, however, involves reconfiguring hardware, which comes at a cost. Consequently, finding a decomposition with few permutation matrices improves performance [33]. As the BVN decomposition is not unique, this raises an intriguing question: can we find BVN decompositions with a small number of permutation matrices? It turns out the problem is NP-hard, so, in my work with Euiwoong Lee and Mohit Singh [26], we designed the first logarithmic approximation algorithm for the problem. Our result is obtained by solving a more general problem called generalized edge coloring, which we solved by formulating it as a linear program and then doing a randomized rounding. Finally, we appeal to the Lovász Local Lemma (LLL) to argue about the approximation factor of the algorithm.
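For intuition, here is a sketch of the classical greedy BVN decomposition: repeatedly find a permutation in the support of the matrix and peel it off with the largest feasible coefficient. This standard greedy procedure may use many permutations; it is the baseline the minimization question above improves on, not the logarithmic approximation algorithm from [26]. The example matrix is made up:

```python
# Greedy Birkhoff-von Neumann decomposition of a doubly stochastic matrix.

def find_permutation(matrix, eps=1e-9):
    """Perfect matching on the support of `matrix` via augmenting paths
    (Kuhn's algorithm). Returns match, with match[col] = assigned row."""
    n = len(matrix)
    match = [-1] * n

    def augment(row, seen):
        for col in range(n):
            if matrix[row][col] > eps and col not in seen:
                seen.add(col)
                if match[col] == -1 or augment(match[col], seen):
                    match[col] = row
                    return True
        return False

    for row in range(n):
        augment(row, set())
    return match

def bvn_decompose(matrix, eps=1e-9):
    """Return [(coefficient, perm)] with perm[i] = column of row i's 1-entry."""
    n = len(matrix)
    m = [row[:] for row in matrix]
    terms = []
    while max(max(row) for row in m) > eps:
        match = find_permutation(m, eps)
        perm = [0] * n
        for col, row in enumerate(match):
            perm[row] = col
        # Peel off the permutation with the largest feasible coefficient.
        coeff = min(m[i][perm[i]] for i in range(n))
        for i in range(n):
            m[i][perm[i]] -= coeff
        terms.append((coeff, perm))
    return terms

doubly_stochastic = [
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
]
terms = bvn_decompose(doubly_stochastic)
print([(round(c, 2), p) for c, p in terms])  # two permutations, weight 0.5 each
```

Birkhoff's theorem guarantees a perfect matching always exists in the support of a doubly stochastic matrix, so the greedy loop always terminates.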

Practice In collaboration with Ghobadi et al. (SIGCOMM'16 [16]), I also studied the online version of the problem. In the online setting, a stream of packets arrives online, and the routing decision at each time step is to find a matching between servers such that the average completion-time/flow-time of packets is minimized. Our online algorithm for the problem finds a stable matching (via the stable marriage algorithm) among servers, where the preference between a pair of servers is given by the total number of packets that need to be routed between them. Our experiments show that the algorithm outperforms previously used algorithms for the objective of minimizing average completion-time/flow-time of packets. We also prove that the algorithm is constant competitive using duality; in fact, the algorithm can be derived from a primal-dual argument.
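The stable matching step can be sketched as follows: senders and receivers rank each other by pending traffic volume, and Gale-Shapley produces a matching with no "blocking pair" that would both prefer each other. The traffic matrix is hypothetical, and this is my reading of the core subroutine rather than the system's actual implementation:

```python
# Gale-Shapley stable matching where preference between a sender and a
# receiver is the volume of pending traffic between them (larger = better).

def stable_matching(traffic):
    """traffic[s][r]: pending packets from sender s to receiver r."""
    n = len(traffic)
    # Each sender proposes to receivers in decreasing order of traffic.
    prefs = [sorted(range(n), key=lambda r: -traffic[s][r]) for s in range(n)]
    next_proposal = [0] * n
    matched_to = [None] * n     # matched_to[r] = sender currently held by r
    free = list(range(n))
    while free:
        s = free.pop()
        r = prefs[s][next_proposal[s]]
        next_proposal[s] += 1
        current = matched_to[r]
        if current is None:
            matched_to[r] = s
        elif traffic[s][r] > traffic[current][r]:
            matched_to[r] = s       # receiver trades up to the heavier flow
            free.append(current)
        else:
            free.append(s)          # rejected; will propose to the next choice
    return matched_to

traffic = [
    [9, 1, 3],
    [4, 8, 2],
    [2, 7, 5],
]
print(stable_matching(traffic))  # [0, 1, 2]: heavy sender-receiver pairs matched
```

Since both sides rank a pair by the same traffic value, the heaviest pending flows tend to get matched first, which is what drives down the average flow-time of packets.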

3 Price of Anarchy Analysis via Duality

Another source of uncertainty in data center settings is strategic behavior by users. Even in clusters managed by a single entity, evidence (see for example [17]) shows that users manipulate their input to get a larger share of resources. Thus, it is imperative that algorithms used in such settings do not degrade in performance due to manipulations by selfish users. The most widely used metric for this degradation is the price of anarchy, which measures the inefficiency that results from selfish behavior compared to the optimal solution without any selfish behavior. A smaller PoA value implies that the algorithm does well even when users are selfish.

In a series of papers [7, 8, 27], I studied scheduling and routing problems in selfish settings and designed algorithms with small PoA. In the process of obtaining these results, I introduced a new framework for analyzing the PoA via dual-fitting. The key idea in this approach is to exploit the properties of equilibrium outcomes to construct a dual solution that gives the bound on the PoA via the weak duality theorem. The dual-fitting technique is a natural tool for PoA analysis, yet before our work it was not used in the PoA literature. In a recent work with Vahab Mirrokni (SODA'15 [27]), I showed the broad applicability of the framework by giving alternate proofs of several classical PoA results known in the literature, including congestion games, facility location games, and simultaneous second price auctions. Another recent work by Cai et al. [12] applies the dual-fitting idea to mechanism design problems, and shows that many mechanisms [3] can be analyzed using the duality approach.
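To make the PoA metric itself concrete, consider the textbook Pigou network (this toy example is standard in the PoA literature, not drawn from the papers above): one unit of selfish traffic chooses between a link of constant latency 1 and a link whose latency equals its own load. At equilibrium all traffic takes the variable link, while the optimum splits the traffic:

```python
# Price of anarchy for Pigou's two-link network: one unit of traffic,
# link A has constant latency 1, link B has latency equal to its load x.

def social_cost(x):
    """Total latency when fraction x uses link B and 1 - x uses link A."""
    return x * x + (1 - x) * 1

# Nash equilibrium: link B costs at most 1 for any x <= 1, so no selfish
# user ever prefers link A; all traffic takes B.
equilibrium_cost = social_cost(1.0)

# Social optimum: minimize x^2 + (1 - x) over a fine grid (true min at x = 1/2).
optimal_cost = min(social_cost(i / 10000) for i in range(10001))

poa = equilibrium_cost / optimal_cost
print(equilibrium_cost, round(optimal_cost, 4), round(poa, 4))  # 1.0 0.75 1.3333
```

The ratio 4/3 is the worst case for linear latencies; the duality framework described above recovers bounds of this kind by fitting a feasible dual solution to the equilibrium outcome.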

These results hold promise for unifying the price of anarchy and mechanism design literatures with the rich literature on linear and convex programming based methods. I am excited to be a part of this process, and I am exploring these connections.



4 Future Research

While I will continue to work on some specific questions in my current areas of research, my vision for the future is:

To find unifying algorithm design principles that will lead to simple, fast, and efficient algorithms for solvingproblems that arise in building data centers and applications in the cloud.

In particular, I will continue to strive for a balance between working on fundamental theoretical problems and problems that are of immense practical interest. Below I describe two broad research directions that I am excited about.

Beyond Resource Allocation View of Cluster Scheduling Much of the cluster scheduling literature, both in theory and in practice, has taken a resource allocation view of the problem. Most algorithms for these problems, be it DRF [17] – the fair scheduler used in Hadoop – or PF, proposed in my work [22], make instantaneous decisions at each time step on how to partition resources such as CPU, memory, bandwidth, etc., among jobs, without taking into account temporal properties such as the processing lengths of jobs or dependencies among tasks (DAGs). This view has its merits: conceptually, it is easier to think and reason about instantaneous allocations, and there is a vast literature to rely upon both in computer science and in economics. As I showed recently, however, the resource allocation view of cluster scheduling leads to a substantial degradation in performance [18]. Aside from performance considerations, another drawback of this view is that properties that hold instantaneously do not extend temporally. For example, DRF and PF are instantaneously Pareto efficient, but are not Pareto efficient on the completion times of jobs; the same is true for truthfulness. In my opinion, this is a fundamental limitation of the resource allocation view of cluster scheduling. In a recent project [18], we proposed first steps in moving beyond the resource allocation view and focusing on temporal aspects: in particular, guaranteeing fairness and quality of service on job completion times instead of at each time step.

The temporal view of cluster scheduling provably improves performance, but poses new algorithmic and conceptual questions. How do we define what a fair allocation over a period of time is? How do we handle online job arrivals in the definition of fairness? It also raises fundamental algorithmic questions: How do we estimate job parameters such as run times? How do we learn the dependency structure among tasks of a job? How do we schedule DAGs efficiently? Solving these problems will require ideas from the fair allocation literature, economics, and learning theory.

On the optimization front, even when there is only one resource type, our understanding of scheduling jobs that have dependencies is limited. Moreover, cluster scheduling problems are fundamentally multidimensional in nature, adding another layer of complexity to already hard problems. Making progress on these problems will require substantially new ideas, and one of my main goals is to develop an algorithmic toolkit to solve them. An approach that holds promise for attacking these problems is lift-and-project methods, or LP hierarchies. Natural LP relaxations for scheduling problems in the presence of dependencies usually have large integrality gaps. LP hierarchies offer a systematic way of strengthening a relaxation (reducing the integrality gap) by adding more constraints. I am actively exploring lift-and-project methods for solving precedence constrained scheduling problems, which have remained open for more than two decades.

An alternate approach I am exploring, which can also lead to new insights on these problems, is to move beyond worst case analysis and understand the hardness of the problems in semi-random models or on stable instances [9]. These models have proven successful in designing simple algorithms that work well in practice for many problems that are provably hard in the worst case [4, 29].

Interplay Between Online Learning Theory and Online Algorithms Both online algorithms and learning algorithms address the same core issue: how to make decisions in the face of uncertainty. Yet their views of uncertainty are vastly different: online algorithms assume that the input is adversarial, and hence optimize for the worst case; in contrast, learning algorithms assume that the input comes from an unknown distribution that can be learned. While the two areas took rather different approaches in the beginning, in recent years we are beginning to see an interplay of ideas between them.

Since much of my work is in the area of online algorithms, I will discuss how ideas from learning theory are influencing the design and analysis of online algorithms. First, techniques from learning theory have great potential for resolving important open problems in the online algorithms literature. For example, many ideas used in the design and analysis of algorithms for Metrical Task Systems (MTS) [15] or paging [5] can be interpreted, in hindsight, through learning theoretic ideas such as regularization; see, for example, the elegant papers by Buchbinder et al. [10, 11, 2]. These ideas may prove useful for making progress on long-standing open problems such as the k-server problem.

Second, there are several problems traditionally studied in the realm of online algorithms that can benefit from a learning theoretic perspective. I would like to explain this using a concrete example. An important problem faced by cloud service providers such as Amazon Elastic Compute Cloud or Microsoft Azure is virtual machine (VM) scheduling. In this problem, jobs (users) want to rent a certain number of VMs, and the cloud service provider needs to decide whether or not to offer the service. If it offers the service to a client, then it also needs to schedule the job. The goal of the service provider is to design an algorithm that maximizes the total value. The problem has a long line of work in the competitive analysis framework. A set of jobs arrives online, and each job has a certain value vj, an interval [rj, dj] where it needs to be scheduled, and a processing length pj. The goal is to maximize the total value ∑j vj of the scheduled jobs. In the adversarial setting, however, no online algorithm can achieve a good approximation factor (or competitive ratio).

If one takes a closer look at the problem, it becomes clear that the model studied in the online algorithms literature is not the right one for the cloud setting. In the cloud setting, a service provider first sets a price, then accepts the jobs that are willing to pay that price, subject to the availability of resources. So the goal is to find the best price that maximizes the total value collected. Therefore, it is more natural to view this as a learning problem, where the online algorithm has to learn the optimal price. Once our algorithm has this structure – that it first declares a price – it is also natural to ask whether the users have any incentive to misreport their values. This adds an economic angle to the learning question, where we want algorithms that are incentive compatible. In a recent work [13], we gave an algorithm that learns the optimal price and is also incentive compatible; in particular, it achieves sub-linear regret on the total value.
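The posted-price view can be sketched with a toy simulation: the provider declares a per-VM price, accepts jobs willing to pay it while capacity lasts, and we search for the price that maximizes the value collected. The job values and capacity are hypothetical, and this offline search only illustrates the objective; the actual algorithm in [13] learns the price online with sub-linear regret:

```python
# Toy posted-price simulation: which declared per-VM price collects the
# most value from a stream of jobs, given limited capacity?

def value_at_price(price, jobs, capacity):
    """Accept jobs (in arrival order) with value >= price until capacity runs out."""
    used, collected = 0, 0
    for value, demand in jobs:
        if value >= price and used + demand <= capacity:
            used += demand
            collected += price * demand
    return collected

jobs = [(5, 2), (1, 3), (4, 2), (2, 1)]  # hypothetical (per-VM value, VMs requested)
capacity = 4

# Only prices equal to some job's value are worth considering.
prices = sorted({value for value, _ in jobs})
best_price = max(prices, key=lambda p: value_at_price(p, jobs, capacity))
print(best_price, value_at_price(best_price, jobs, capacity))  # 4 16
```

Note the trade-off the learner faces: a high price extracts more per VM but rejects jobs, while a low price fills capacity with low-value jobs; the regret bound in [13] quantifies how quickly an online algorithm can converge to the right balance.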

The main takeaway from this example is that looking at traditional online problems, especially the ones with strong lower bounds, from a learning theoretic angle can lead to better algorithms for dealing with uncertainty. With the abundance of data, this may also be the right approach to model uncertainty and move beyond worst case analysis.

References

[1] Apache Hadoop YARN. https://hadoop.apache.org/docs/r2.7.3/hadoop-yarn/hadoop-yarn-site/YARN.html.

[2] Jacob D. Abernethy, Peter L. Bartlett, Niv Buchbinder, and Isabelle Stanton. A regularization approach to metrical task systems. In ALT, 2010.

[3] Moshe Babaioff, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg. A simple and approximately optimal mechanism for an additive buyer. In FOCS, 2014.

[4] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta. Clustering under approximation stability. J. ACM, 2013.

[5] Nikhil Bansal, Niv Buchbinder, and Joseph Naor. A primal-dual randomized algorithm for weighted paging. J. ACM, 2012.

[6] Nikhil Bansal and Janardhan Kulkarni. Minimizing flow-time on unrelated machines. In STOC, 2015.

[7] Sayan Bhattacharya, Sungjin Im, Janardhan Kulkarni, and Kamesh Munagala. Coordination mechanisms from (almost) all scheduling policies. In ITCS, 2014.

[8] Sayan Bhattacharya, Janardhan Kulkarni, and Vahab S. Mirrokni. Coordination mechanisms for selfish routing over time on a tree. In ICALP, 2014.

[9] Yonatan Bilu and Nathan Linial. Are stable instances easy? Combinatorics, Probability & Computing, 2012.

[10] Niv Buchbinder, Shahar Chen, and Joseph Naor. Competitive analysis via regularization. In SODA, 2014.



[11] Niv Buchbinder, Shahar Chen, Joseph Naor, and Ohad Shamir. Unified algorithms for online learning and competitive analysis. In COLT, 2012.

[12] Yang Cai, Nikhil R. Devanur, and S. Matthew Weinberg. A duality based unified approach to Bayesian mechanism design. In STOC, 2016.

[13] Shuchi Chawla, Nikhil Devanur, Janardhan Kulkarni, and Rad Niazadeh. No regret scheduling. Preprint, 2017.

[14] Nikhil Devanur and Janardhan Kulkarni. Minimizing general delay costs on unrelated machines. Submitted to STOC, 2017.

[15] Amos Fiat and Manor Mendel. Better algorithms for unfair metrical task systems and applications. In STOC,2000.

[16] Monia Ghobadi, Ratul Mahajan, Amar Phanishayee, Nikhil R. Devanur, Janardhan Kulkarni, Gireeja Ranade, Pierre-Alexandre Blanche, Houman Rastegarfar, Madeleine Glick, and Daniel C. Kilper. ProjecToR: Agile reconfigurable data center interconnect. In SIGCOMM, 2016.

[17] Ali Ghodsi, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, and Ion Stoica. Dominant resource fairness: Fair allocation of multiple resource types. In NSDI, 2011.

[18] Inigo Goiri, Angela Jiang, Janardhan Kulkarni, Nikhil Devanur, Ishai Menache, and Srikanth Kandula. Temporal relaxations of instantaneous fairness. Submitted, 2017.

[19] Robert Grandl, Srikanth Kandula, Sriram Rao, Aditya Akella, and Janardhan Kulkarni. Packing and dependency-aware scheduling for data-parallel clusters. In OSDI, 2016.

[20] Sungjin Im, Nathaniel Kell, Janardhan Kulkarni, Debmalya Panigrahi, and Maryam Shadloo. New results on online vector scheduling. Submitted, 2017.

[21] Sungjin Im, Janardhan Kulkarni, Benjamin Moseley, and Kamesh Munagala. A competitive flow time algorithm for heterogeneous clusters under polytope constraints. In APPROX/RANDOM, 2016.

[22] Sungjin Im, Janardhan Kulkarni, and Kamesh Munagala. Competitive algorithms from competitive equilibria: Non-clairvoyant scheduling under polyhedral constraints. In STOC, 2014.

[23] Sungjin Im, Janardhan Kulkarni, and Kamesh Munagala. Competitive flow-time algorithms for polyhedral scheduling. In FOCS, 2015.

[24] Sungjin Im, Janardhan Kulkarni, Kamesh Munagala, and Kirk Pruhs. SELFISHMIGRATE: A scalable algorithm for non-clairvoyantly scheduling heterogeneous processors. In FOCS, 2014.

[25] Sangeetha Abdu Jyothi, Carlo Curino, Ishai Menache, Shravan Matthur Narayanamurthy, Alexey Tumanov, Jonathan Yaniv, Ruslan Mavlyutov, Inigo Goiri, Subramaniam Venkatraman Krishnan, Janardhan Kulkarni, and Sriram Rao. Morpheus: Towards automated SLOs for enterprise clusters. In OSDI, 2016.

[26] Janardhan Kulkarni, Euiwoong Lee, and Mohit Singh. Minimum Birkhoff-von Neumann decompositions. Submitted to FOCS, 2017.

[27] Janardhan Kulkarni and Vahab Mirrokni. Robust price of anarchy via LP and Fenchel duality. In SODA, 2015.

[28] Jan Karel Lenstra, David B. Shmoys, and Éva Tardos. Approximation algorithms for scheduling unrelated parallel machines. Math. Program., 1990.

[29] Konstantin Makarychev, Yury Makarychev, and Aravindan Vijayaraghavan. Bilu-Linial stable instances of max cut and minimum multiway cut. In SODA, 2014.



[30] John F. Nash Jr. The bargaining problem. Econometrica: Journal of the Econometric Society, 1950.

[31] George Porter, Richard D. Strong, Nathan Farrington, Alex Forencich, Pang-Chen Sun, Tajana Rosing, Yeshaiahu Fainman, George Papen, and Amin Vahdat. Integrating microsecond circuit switching into the data center. In SIGCOMM, 2013.

[32] Tim Roughgarden. Intrinsic robustness of the price of anarchy. J. ACM, 2015.

[33] Shaileshh Bojja Venkatakrishnan, Mohammad Alizadeh, and Pramod Viswanath. Costly circuits, submodular schedules and approximate Carathéodory theorems. In SIGMETRICS, 2016.
