

CASE STUDY

THE GLOBAL LEADER IN CLOUD-BASED SOFTWARE FOR THE UTILITY INDUSTRY

“Pepperdata ensures that our production Hadoop environment remains reliable enough to handle all of our diverse workloads so we can meet our SLAs with confidence.” - Eric Chang, Data Infrastructure Technology Lead, Opower

Opower is a leading provider of cloud-based software to the utility industry and is transforming the way utilities relate to their customers. By combining data management, insightful analytics, and behavioral science, Opower’s customer engagement platform positions utilities as trusted energy advisors to the customers they serve. To date, the Opower platform has created enough energy savings through behavior change to power all the homes in a city of 1 million people for a year.

Opower uses Hadoop as the infrastructure foundation for its data warehouse and its operational data store. The Software as a Service (SaaS) company relies on Hadoop to deliver big data analytics to its more than 95 utility partners, including 28 of the 50 largest U.S. electric utilities, and over 50 million household and business customers in nine countries. The multi-tenant production environment runs mixed workloads that include Hive queries, MapReduce jobs, and HBase serving data in real time to end users.

KEY CHALLENGES

Eric Chang, Data Infrastructure Technology Lead at Opower, recalls the early days of their Hadoop deployment. “As with many initial Hadoop deployments, stability was a big challenge for us.” Jobs would compete with each other for physical resources. When cluster capacity was exceeded, the Hadoop cluster would experience cascading failures and critical applications would become unavailable.

Annual Revenue
$88.7mm (2013)

Business Needs
Big data platform to support cloud-based analytics for more than 95 utility partners and their customers

Scalability to handle hundreds of billions of data points from energy meters, third-party data feeds, and event data

Key Challenges
Maintaining a stable environment to predictably meet SLAs

Diagnosing performance problems in a timely manner

Minimizing hardware expenses

Solution & Results
Pepperdata Supervisor real-time cluster optimizer:

• Hive queries, HBase, and MapReduce jobs run reliably on the same multi-tenant production cluster
• SLAs are met with confidence
• Increased throughput, fewer hardware resources required


Chang and his team took a number of steps to mitigate these issues through extensive capacity planning, cluster tuning, prioritized job scheduling, and the careful curation and testing of jobs before release to production. In spite of adopting these best practices, Chang saw a need to do more.
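As an illustration only, since the case study does not detail Opower’s exact configuration, here is a minimal Java sketch of one common form prioritized job scheduling takes in stock Hadoop 2.x: routing a MapReduce job to a dedicated scheduler queue and raising its priority. The queue name and job name are hypothetical.

// Illustrative sketch: route a MapReduce job to a dedicated scheduler
// queue and raise its priority. Queue and job names are hypothetical,
// not taken from the case study.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobPriority;

public class PrioritizedJobSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Send the job to a queue reserved for SLA-critical workloads
        // (the queue must be defined in the cluster's scheduler config).
        conf.set("mapreduce.job.queuename", "sla-critical");

        Job job = Job.getInstance(conf, "nightly-meter-aggregation");
        // Hint to the scheduler that this job should run ahead of
        // ad hoc analytics competing for the same resources.
        job.setPriority(JobPriority.HIGH);

        // Mapper, reducer, and input/output paths would be configured
        // here before calling job.waitForCompletion(true).
    }
}

Queue assignments like this reserve capacity for critical jobs, but as the case study notes, static scheduling alone did not prevent contention once the cluster neared its limits.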

The volume of data from energy meters, third-party data feeds, and event data was growing rapidly. At the same time, the number of users and the diversity of workloads continued to increase. In the face of these challenges, meeting SLAs while containing hardware expenses was critically important.

MORE VISIBILITY, CONTROL, AND CAPACITY

Opower began using Pepperdata Supervisor in the spring of 2014. Installation took less than an hour, with no modifications to Opower’s schedulers, workflow, or jobs.

Opower’s Hadoop administrators now have the ability to monitor every facet of cluster performance in real time. Visibility into CPU, memory, disk I/O, and network usage by job, task, user, and group makes it easier to identify potential performance problems before they occur and take preventive action.

When performance bottlenecks do occur, Opower’s Hadoop administrators are able to quickly diagnose and fix the problem. Troubleshooting activities that used to take days are now typically completed in a matter of minutes.

Opower is also using Pepperdata Supervisor to dynamically adjust job resources to reflect its service-level priorities, ensuring that the cluster devotes sufficient resources to the most important jobs. As a result, critical jobs now run faster, more reliably, and more efficiently on Opower’s existing servers, allowing the infrastructure team to scale services with fewer hardware resources.

Copyright © 2014 Pepperdata, Inc. All rights reserved. PEPPERDATA and the logo are registered trademarks of Pepperdata, Inc. The other trademarks are trademarks of Pepperdata, Inc.

Pepperdata’s products and services may be covered by U.S. Patent Nos. 8,706,798 and 8,849,891, as well as other patents that are pending.